Posted to commits@iceberg.apache.org by gi...@apache.org on 2023/05/31 21:01:32 UTC

[iceberg-docs] branch asf-site updated: deploy: 8bfa72ad1e1c53523e39ecec6d6bd504057e5002

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/iceberg-docs.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 49d2bd89 deploy: 8bfa72ad1e1c53523e39ecec6d6bd504057e5002
49d2bd89 is described below

commit 49d2bd89c8858835c494f5896c7c13a70055f810
Author: aokolnychyi <ao...@users.noreply.github.com>
AuthorDate: Wed May 31 21:01:26 2023 +0000

    deploy: 8bfa72ad1e1c53523e39ecec6d6bd504057e5002
---
 docs/1.3.0/api/index.html                        |  2 +-
 docs/1.3.0/aws/index.html                        | 47 ++++++++--------
 docs/1.3.0/branching/index.html                  |  4 +-
 docs/1.3.0/configuration/index.html              | 11 ++--
 docs/1.3.0/custom-catalog/index.html             |  2 +-
 docs/1.3.0/dell/index.html                       |  2 +-
 docs/1.3.0/delta-lake-migration/index.html       |  2 +-
 docs/1.3.0/evolution/index.html                  |  2 +-
 docs/1.3.0/flink-actions/index.html              |  2 +-
 docs/1.3.0/flink-configuration/index.html        |  4 +-
 docs/1.3.0/flink-connector/index.html            |  2 +-
 docs/1.3.0/flink-ddl/index.html                  |  2 +-
 docs/1.3.0/flink-queries/index.html              |  2 +-
 docs/1.3.0/flink-writes/index.html               |  2 +-
 docs/1.3.0/flink/index.html                      |  2 +-
 docs/1.3.0/getting-started/index.html            |  2 +-
 docs/1.3.0/hive-migration/index.html             |  2 +-
 docs/1.3.0/hive/index.html                       |  2 +-
 docs/1.3.0/index.html                            |  2 +-
 docs/1.3.0/index.xml                             |  5 +-
 docs/1.3.0/java-api-quickstart/index.html        |  2 +-
 docs/1.3.0/jdbc/index.html                       |  2 +-
 docs/1.3.0/maintenance/index.html                |  2 +-
 docs/1.3.0/nessie/index.html                     |  2 +-
 docs/1.3.0/partitioning/index.html               |  2 +-
 docs/1.3.0/performance/index.html                |  2 +-
 docs/1.3.0/reliability/index.html                |  2 +-
 docs/1.3.0/schemas/index.html                    |  2 +-
 docs/1.3.0/spark-configuration/index.html        |  9 ++--
 docs/1.3.0/spark-ddl/index.html                  |  2 +-
 docs/1.3.0/spark-procedures/index.html           | 24 +++++----
 docs/1.3.0/spark-queries/index.html              |  2 +-
 docs/1.3.0/spark-structured-streaming/index.html |  2 +-
 docs/1.3.0/spark-writes/index.html               | 68 +++++++++++++-----------
 docs/1.3.0/table-migration/index.html            |  2 +-
 35 files changed, 120 insertions(+), 106 deletions(-)

diff --git a/docs/1.3.0/api/index.html b/docs/1.3.0/api/index.html
index f19b00d6..a3e28b84 100644
--- a/docs/1.3.0/api/index.html
+++ b/docs/1.3.0/api/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/aws/index.html b/docs/1.3.0/aws/index.html
index 0ea7b175..24c6083b 100644
--- a/docs/1.3.0/aws/index.html
+++ b/docs/1.3.0/aws/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
@@ -30,7 +30,7 @@ Here are some examples.</p><h3 id=spark>Spark</h3><p>For example, to use AWS fea
 </span></span><span style=display:flex><span>ICEBERG_VERSION<span style=color:#f92672>=</span>1.2.1
 </span></span><span style=display:flex><span>DEPENDENCIES<span style=color:#f92672>=</span><span style=color:#e6db74>&#34;org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:</span>$ICEBERG_VERSION<span style=color:#e6db74>&#34;</span>
 </span></span><span style=display:flex><span>
-</span></span><span style=display:flex><span><span style=color:#75715e># add AWS dependnecy</span>
+</span></span><span style=display:flex><span><span style=color:#75715e># add AWS dependency</span>
 </span></span><span style=display:flex><span>AWS_SDK_VERSION<span style=color:#f92672>=</span>2.20.18
 </span></span><span style=display:flex><span>AWS_MAVEN_GROUP<span style=color:#f92672>=</span>software.amazon.awssdk
 </span></span><span style=display:flex><span>AWS_PACKAGES<span style=color:#f92672>=(</span>
@@ -53,7 +53,7 @@ Here are some examples.</p><h3 id=spark>Spark</h3><p>For example, to use AWS fea
 </span></span><span style=display:flex><span>ICEBERG_MAVEN_URL<span style=color:#f92672>=</span>$MAVEN_URL/org/apache/iceberg
 </span></span><span style=display:flex><span>wget $ICEBERG_MAVEN_URL/iceberg-flink-runtime/$ICEBERG_VERSION/iceberg-flink-runtime-$ICEBERG_VERSION.jar
 </span></span><span style=display:flex><span>
-</span></span><span style=display:flex><span><span style=color:#75715e># download AWS dependnecy</span>
+</span></span><span style=display:flex><span><span style=color:#75715e># download AWS dependency</span>
 </span></span><span style=display:flex><span>AWS_SDK_VERSION<span style=color:#f92672>=</span>2.20.18
 </span></span><span style=display:flex><span>AWS_MAVEN_URL<span style=color:#f92672>=</span>$MAVEN_URL/software/amazon/awssdk
 </span></span><span style=display:flex><span>AWS_PACKAGES<span style=color:#f92672>=(</span>
@@ -102,8 +102,8 @@ More details about loading the catalog can be found in individual engine pages,
 By default, <code>GlueCatalog</code> chooses the Glue metastore to use based on the user&rsquo;s default AWS client credential and region setup.
 You can specify the Glue catalog ID through <code>glue.id</code> catalog property to point to a Glue catalog in a different AWS account.
 The Glue catalog ID is your numeric AWS account ID.
-If the Glue catalog is in a different region, you should configure you AWS client to point to the correct region,
-see more details in <a href=#aws-client-customization>AWS client customization</a>.</p><h4 id=skip-archive>Skip Archive</h4><p>AWS Glue has the ability to archive older table versions and a user can rollback the table to any historical version if needed.
+If the Glue catalog is in a different region, you should configure your AWS client to point to the correct region,
+see more details in <a href=#aws-client-customization>AWS client customization</a>.</p><h4 id=skip-archive>Skip Archive</h4><p>AWS Glue has the ability to archive older table versions and a user can roll back the table to any historical version if needed.
 By default, the Iceberg Glue Catalog will skip the archival of older table versions.
 If a user wishes to archive older table versions, they can set <code>glue.skip-archive</code> to false.
 Do note for streaming ingestion into Iceberg tables, setting <code>glue.skip-archive</code> to false will quickly create a lot of Glue table versions.
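
For reference, a minimal sketch of wiring these Glue settings into a Spark session, following the conf pattern used elsewhere on this page; the catalog name, bucket, and account ID are placeholders, while glue.id and glue.skip-archive are the property names from the text above:

    # assumes a catalog named "my_catalog" and a placeholder Glue account ID
    spark-sql --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
        --conf spark.sql.catalog.my_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog \
        --conf spark.sql.catalog.my_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
        --conf spark.sql.catalog.my_catalog.warehouse=s3://my-bucket/my/key/prefix \
        --conf spark.sql.catalog.my_catalog.glue.id=123456789012 \
        --conf spark.sql.catalog.my_catalog.glue.skip-archive=false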
@@ -115,9 +115,8 @@ and table name validation are skipped, there is no guarantee that downstream sys
 With optimistic locking, each table has a version id.
 If users retrieve the table metadata, Iceberg records the version id of that table.
 Users can update the table as long as the version ID on the server side remains unchanged.
-If there is a version mismatch, it means that someone else has modified the table before you did.
-The update attempt fails, because you have a stale version of the table.
-If this happens, Iceberg refreshes the metadata and checks if there might be potential conflict.
+A version mismatch occurs if someone else modified the table before you did, causing the update attempt to fail.
+Iceberg then refreshes metadata and checks if there is a conflict.
 If there is no commit conflict, the operation will be retried.
 Optimistic locking guarantees atomic transactions for Iceberg tables in Glue.
 It also prevents others from accidentally overwriting your changes.</p><div class=info>Please use AWS SDK version >= 2.17.131 to leverage Glue&rsquo;s Optimistic Locking.
@@ -136,17 +135,17 @@ For example, in Spark SQL you can do:</p><div class=highlight><pre tabindex=0 st
 </span></span><span style=display:flex><span><span style=color:#66d9ef>USING</span> iceberg
 </span></span><span style=display:flex><span><span style=color:#66d9ef>OPTIONS</span> (<span style=color:#e6db74>&#39;location&#39;</span><span style=color:#f92672>=</span><span style=color:#e6db74>&#39;s3://my-special-table-bucket&#39;</span>)
 </span></span><span style=display:flex><span>PARTITIONED <span style=color:#66d9ef>BY</span> (category);
-</span></span></code></pre></div><p>For engines like Spark that supports the <code>LOCATION</code> keyword, the above SQL statement is equivalent to:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> my_catalog.my_ns.my_table (
+</span></span></code></pre></div><p>For engines like Spark that support the <code>LOCATION</code> keyword, the above SQL statement is equivalent to:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> my_catalog.my_ns.my_table (
 </span></span><span style=display:flex><span>    id bigint,
 </span></span><span style=display:flex><span>    <span style=color:#66d9ef>data</span> string,
 </span></span><span style=display:flex><span>    category string)
 </span></span><span style=display:flex><span><span style=color:#66d9ef>USING</span> iceberg
 </span></span><span style=display:flex><span><span style=color:#66d9ef>LOCATION</span> <span style=color:#e6db74>&#39;s3://my-special-table-bucket&#39;</span>
 </span></span><span style=display:flex><span>PARTITIONED <span style=color:#66d9ef>BY</span> (category);
-</span></span></code></pre></div><h3 id=dynamodb-catalog>DynamoDB Catalog</h3><p>Iceberg supports using a <a href=https://aws.amazon.com/dynamodb>DynamoDB</a> table to record and manage database and table information.</p><h4 id=configurations>Configurations</h4><p>The DynamoDB catalog supports the following configurations:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>dynamodb.table-name</td><td>iceberg</td><td>name of the DynamoDB  [...]
-You can configure to use JDBC catalog with relational database services like <a href=https://aws.amazon.com/rds>AWS RDS</a>.
+</span></span></code></pre></div><h3 id=dynamodb-catalog>DynamoDB Catalog</h3><p>Iceberg supports using a <a href=https://aws.amazon.com/dynamodb>DynamoDB</a> table to record and manage database and table information.</p><h4 id=configurations>Configurations</h4><p>The DynamoDB catalog supports the following configurations:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>dynamodb.table-name</td><td>iceberg</td><td>name of the DynamoDB  [...]
+You can configure Iceberg to use the JDBC catalog with relational database services like <a href=https://aws.amazon.com/rds>AWS RDS</a>.
 Read <a href=../jdbc/#jdbc-catalog>the JDBC integration page</a> for guides and examples about using the JDBC catalog.
-Read <a href=https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Java.html>this AWS documentation</a> for more details about configuring JDBC catalog with IAM authentication.</p><h3 id=which-catalog-to-choose>Which catalog to choose?</h3><p>With all the available options, we offer the following guidance when choosing the right catalog to use for your application:</p><ol><li>if your organization has an existing Glue metastore or plans to use the AWS an [...]
+Read <a href=https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Java.html>this AWS documentation</a> for more details about configuring the JDBC catalog with IAM authentication.</p><h3 id=which-catalog-to-choose>Which catalog to choose?</h3><p>With all the available options, we offer the following guidelines when choosing the right catalog to use for your application:</p><ol><li>if your organization has an existing Glue metastore or plans to use the  [...]
 the catalog first obtains a lock using a helper DynamoDB table and then tries to safely modify the Iceberg table.
 This is necessary for a file system-based catalog to ensure atomic transaction in storages like S3 that do not provide file write mutual exclusion.</p><p>This feature requires the following lock related catalog properties:</p><ol><li>Set <code>lock-impl</code> as <code>org.apache.iceberg.aws.dynamodb.DynamoDbLockManager</code>.</li><li>Set <code>lock.table</code> as the DynamoDB table name you would like to use. If the lock table with the given name does not exist in DynamoDB, a new tabl [...]
 For more details, please refer to <a href=../configuration/#lock-catalog-properties>Lock catalog properties</a>.</p><h2 id=s3-fileio>S3 FileIO</h2><p>Iceberg allows users to write data to S3 through <code>S3FileIO</code>.
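
Following the same conf pattern, a hedged sketch of the two lock-related properties listed above; the lock table name is a placeholder, while lock-impl and lock.table come from the text:

    # assumes a DynamoDB lock table that Iceberg may create if it does not exist
    spark-sql --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
        --conf spark.sql.catalog.my_catalog.lock-impl=org.apache.iceberg.aws.dynamodb.DynamoDbLockManager \
        --conf spark.sql.catalog.my_catalog.lock.table=myIcebergLockTable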
@@ -154,12 +153,12 @@ For more details, please refer to <a href=../configuration/#lock-catalog-propert
 Data files are uploaded by parts in parallel as soon as each part is ready,
 and each file part is deleted as soon as its upload process completes.
 This provides maximized upload speed and minimized local disk usage during uploads.
-Here are the configurations that users can tune related to this feature:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>s3.multipart.num-threads</td><td>the available number of processors in the system</td><td>number of threads to use for uploading parts to S3 (shared across all output streams)</td></tr><tr><td>s3.multipart.part-size-bytes</td><td>32MB</td><td>the size of a single part for multipart upload requests</td></tr><tr><td>s [...]
+Here are the configurations that users can tune related to this feature:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>s3.multipart.num-threads</td><td>the available number of processors in the system</td><td>number of threads to use for uploading parts to S3 (shared across all output streams)</td></tr><tr><td>s3.multipart.part-size-bytes</td><td>32MB</td><td>the size of a single part for multipart upload requests</td></tr><tr><td>s [...]
 User can choose the ACL level by setting the <code>s3.acl</code> property.
 For more details, please read <a href=https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html>S3 ACL Documentation</a>.</p><h3 id=object-store-file-layout>Object Store File Layout</h3><p>S3 and many other cloud storage services <a href=https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/>throttle requests based on object prefix</a>.
-Data stored in S3 with a traditional Hive storage layout can face S3 request throttling as objects are stored under the same filepath prefix.</p><p>Iceberg by default uses the Hive storage layout, but can be switched to use the <code>ObjectStoreLocationProvider</code>.
-With <code>ObjectStoreLocationProvider</code>, a determenistic hash is generated for each stored file, with the hash appended
-directly after the <code>write.data.path</code>. This ensures files written to s3 are equally distributed across multiple <a href=https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-key-naming-pattern/>prefixes</a> in the S3 bucket. Resulting in minimized throttling and maximized throughput for S3-related IO operations. When using <code>ObjectStoreLocationProvider</code> having a shared and short <code>write.data.path</code> across your Iceberg tables will improve performanc [...]
+Data stored in S3 with a traditional Hive storage layout can face S3 request throttling as objects are stored under the same file path prefix.</p><p>Iceberg by default uses the Hive storage layout but can be switched to use the <code>ObjectStoreLocationProvider</code>.
+With <code>ObjectStoreLocationProvider</code>, a deterministic hash is generated for each stored file, with the hash appended
+directly after the <code>write.data.path</code>. This ensures files written to S3 are equally distributed across multiple <a href=https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-key-naming-pattern/>prefixes</a> in the S3 bucket, resulting in minimized throttling and maximized throughput for S3-related IO operations. When using <code>ObjectStoreLocationProvider</code>, having a shared and short <code>write.data.path</code> across your Iceberg tables will improve performanc [...]
 Below is an example Spark SQL command to create a table using the <code>ObjectStoreLocationProvider</code>:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> my_catalog.my_ns.my_table (
 </span></span><span style=display:flex><span>    id bigint,
 </span></span><span style=display:flex><span>    <span style=color:#66d9ef>data</span> string,
@@ -174,8 +173,8 @@ Below is an example Spark SQL command to create a table using the <code>ObjectSt
 </code></pre><p>Note, the path resolution logic for <code>ObjectStoreLocationProvider</code> is <code>write.data.path</code> then <code>&lt;tableLocation>/data</code>.
 However, for the older versions up to 0.12.0, the logic is as follows:</p><ul><li>before 0.12.0, <code>write.object-storage.path</code> must be set.</li><li>at 0.12.0, <code>write.object-storage.path</code> then <code>write.folder-storage.path</code> then <code>&lt;tableLocation>/data</code>.</li></ul><p>For more details, please refer to the <a href=../custom-catalog/#custom-location-provider-implementation>LocationProvider Configuration</a> section.</p><h3 id=s3-strong-consistency>S3 St [...]
 There is no redundant consistency wait and check, which might negatively impact performance during IO operations.</p><h3 id=hadoop-s3a-filesystem>Hadoop S3A FileSystem</h3><p>Before <code>S3FileIO</code> was introduced, many Iceberg users chose to use <code>HadoopFileIO</code> to write data to S3 through the <a href=https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java>S3A FileSystem</a>.
-As introduced in the previous sections, <code>S3FileIO</code> adopts latest AWS clients and S3 features for optimized security and performance,
-and is thus recommend for S3 use cases rather than the S3A FileSystem.</p><p><code>S3FileIO</code> writes data with <code>s3://</code> URI scheme, but it is also compatible with schemes written by the S3A FileSystem.
+As introduced in the previous sections, <code>S3FileIO</code> adopts the latest AWS clients and S3 features for optimized security and performance
+and is thus recommended for S3 use cases rather than the S3A FileSystem.</p><p><code>S3FileIO</code> writes data with <code>s3://</code> URI scheme, but it is also compatible with schemes written by the S3A FileSystem.
 This means for any table manifests containing <code>s3a://</code> or <code>s3n://</code> file paths, <code>S3FileIO</code> is still able to read them.
 This feature allows people to easily switch from S3A to <code>S3FileIO</code>.</p><p>If for any reason you have to use S3A, here are the instructions:</p><ol><li>To store data using S3A, specify the <code>warehouse</code> catalog property to be an S3A path, e.g. <code>s3a://my-bucket/my-warehouse</code></li><li>For <code>HiveCatalog</code>, to also store metadata using S3A, specify the Hadoop config property <code>hive.metastore.warehouse.dir</code> to be an S3A path.</li><li>Add <a href [...]
 This is turned off by default.</p><h3 id=s3-tags>S3 Tags</h3><p>Custom <a href=https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html>tags</a> can be added to S3 objects while writing and deleting.
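
If S3A must be used, a minimal sketch of step 1 of the instructions above, pointing the warehouse catalog property at an S3A path; the catalog name and bucket are placeholders, and the s3a:// example path comes from the text:

    # step 1: point the warehouse at an s3a:// path
    spark-sql --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
        --conf spark.sql.catalog.my_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog \
        --conf spark.sql.catalog.my_catalog.warehouse=s3a://my-bucket/my-warehouse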
@@ -221,7 +220,7 @@ access-point for all S3 operations.</p><p>For more details on using access-point
     --conf spark.sql.catalog.my_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
     --conf spark.sql.catalog.my_catalog.s3.acceleration-enabled=true
 </code></pre><p>For more details on using S3 Acceleration, please refer to <a href=https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html>Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration</a>.</p><h3 id=s3-dual-stack>S3 Dual-stack</h3><p><a href=https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html>S3 Dual-stack</a> allows a client to access an S3 bucket through a dual-stack endpoint.
-When clients make a request to a dual-stack endpoint, the bucket URL resolves to an IPv6 address if possible, otherwise fallback to IPv4.</p><p>To use S3 Dual-stack, we need to set <code>s3.dualstack-enabled</code> catalog property to <code>true</code> to enable <code>S3FileIO</code> to make dual-stack S3 calls.</p><p>For example, to use S3 Dual-stack with Spark 3.3, you can start the Spark SQL shell with:</p><pre tabindex=0><code>spark-sql --conf spark.sql.catalog.my_catalog=org.apache. [...]
+When clients send requests to a dual-stack endpoint, the bucket URL resolves to an IPv6 address if possible, otherwise falls back to IPv4.</p><p>To use S3 Dual-stack, set the <code>s3.dualstack-enabled</code> catalog property to <code>true</code> to enable <code>S3FileIO</code> to make dual-stack S3 calls.</p><p>For example, to use S3 Dual-stack with Spark 3.3, you can start the Spark SQL shell with:</p><pre tabindex=0><code>spark-sql --conf spark.sql.catalog.my_catalog=org.apache.iceberg.sp [...]
     --conf spark.sql.catalog.my_catalog.warehouse=s3://my-bucket2/my/key/prefix \
     --conf spark.sql.catalog.my_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog \
     --conf spark.sql.catalog.my_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
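
For reference, a complete invocation sketch of the dual-stack setup; the catalog and bucket names are placeholders, and the only property it adds is the s3.dualstack-enabled flag named in the text above:

    spark-sql --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
        --conf spark.sql.catalog.my_catalog.warehouse=s3://my-bucket2/my/key/prefix \
        --conf spark.sql.catalog.my_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog \
        --conf spark.sql.catalog.my_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
        --conf spark.sql.catalog.my_catalog.s3.dualstack-enabled=true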
@@ -230,7 +229,7 @@ When clients make a request to a dual-stack endpoint, the bucket URL resolves to
 Iceberg allows users to plug in their own implementation of <code>org.apache.iceberg.aws.AwsClientFactory</code> by setting the <code>client.factory</code> catalog property.</p><h3 id=cross-account-and-cross-region-access>Cross-Account and Cross-Region Access</h3><p>It is a common use case for organizations to have a centralized AWS account for Glue metastore and S3 buckets, and use different AWS accounts and regions for different teams to access those resources.
 In this case, a <a href=https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html>cross-account IAM role</a> is needed to access those centralized resources.
 Iceberg provides an AWS client factory <code>AssumeRoleAwsClientFactory</code> to support this common use case.
-This also serves as an example for users who would like to implement their own AWS client factory.</p><p>This client factory has the following configurable catalog properties:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>client.assume-role.arn</td><td>null, requires user input</td><td>ARN of the role to assume, e.g. arn:aws:iam::123456789:role/myRoleToAssume</td></tr><tr><td>client.assume-role.region</td><td>null, requires user inp [...]
+This also serves as an example for users who would like to implement their own AWS client factory.</p><p>This client factory has the following configurable catalog properties:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>client.assume-role.arn</td><td>null, requires user input</td><td>ARN of the role to assume, e.g. arn:aws:iam::123456789:role/myRoleToAssume</td></tr><tr><td>client.assume-role.region</td><td>null, requires user inp [...]
 The Glue, S3 and DynamoDB clients are then initialized with the assume-role credential and region to access resources.
 Here is an example to start Spark shell with this client factory:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-shell data-lang=shell><span style=display:flex><span>spark-sql --packages org.apache.iceberg:iceberg-spark-runtime:1.2.1,software.amazon.awssdk:bundle:2.20.18 <span style=color:#ae81ff>\
 </span></span></span><span style=display:flex><span><span style=color:#ae81ff></span>    --conf spark.sql.catalog.my_catalog<span style=color:#f92672>=</span>org.apache.iceberg.spark.SparkCatalog <span style=color:#ae81ff>\
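
A hedged end-to-end sketch of configuring this factory: the client.factory property, the package versions, and the example ARN appear in the text above, while the fully qualified factory class name and the region value are assumptions:

    # assumes the factory class path org.apache.iceberg.aws.AssumeRoleAwsClientFactory
    spark-sql --packages org.apache.iceberg:iceberg-spark-runtime:1.2.1,software.amazon.awssdk:bundle:2.20.18 \
        --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
        --conf spark.sql.catalog.my_catalog.client.factory=org.apache.iceberg.aws.AssumeRoleAwsClientFactory \
        --conf spark.sql.catalog.my_catalog.client.assume-role.arn=arn:aws:iam::123456789:role/myRoleToAssume \
        --conf spark.sql.catalog.my_catalog.client.assume-role.region=us-east-1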
@@ -242,9 +241,9 @@ Here is an example to start Spark shell with this client factory:</p><div class=
 </span></span></code></pre></div><h3 id=http-client-configurations>HTTP Client Configurations</h3><p>AWS clients support two types of HTTP Client, <a href=https://mvnrepository.com/artifact/software.amazon.awssdk/url-connection-client>URL Connection HTTP Client</a>
 and <a href=https://mvnrepository.com/artifact/software.amazon.awssdk/apache-client>Apache HTTP Client</a>.
 By default, AWS clients use <strong>URL Connection</strong> HTTP Client to communicate with the service.
-This HTTP client optimizes for minimum dependencies and startup latency but support less functionality than other implementations.
-In contrast, Apache HTTP Client supports more functionalities and more customized settings, such as expect-continue handshake and TCP KeepAlive, at cost of extra dependency and additional startup latency.</p><p>For more details of configuration, see sections <a href=#url-connection-http-client-configurations>URL Connection HTTP Client Configurations</a> and <a href=#apache-http-client-configurations>Apache HTTP Client Configurations</a>.</p><p>Configure the following property to set the  [...]
-</span></span></code></pre></div><h4 id=apache-http-client-configurations>Apache HTTP Client Configurations</h4><p>Apache HTTP Client has the following configurable properties:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>http-client.apache.socket-timeout-ms</td><td>null</td><td>An optional <a href=https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html#socketTimeout(java.time.Dura [...]
+This HTTP client optimizes for minimum dependencies and startup latency but supports less functionality than other implementations.
+In contrast, Apache HTTP Client supports more functionality and more customizable settings, such as the expect-continue handshake and TCP KeepAlive, at the cost of an extra dependency and additional startup latency.</p><p>For more details on configuration, see the sections <a href=#url-connection-http-client-configurations>URL Connection HTTP Client Configurations</a> and <a href=#apache-http-client-configurations>Apache HTTP Client Configurations</a>.</p><p>Configure the following property to set  [...]
+</span></span></code></pre></div><h4 id=apache-http-client-configurations>Apache HTTP Client Configurations</h4><p>Apache HTTP Client has the following configurable properties:</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>http-client.apache.socket-timeout-ms</td><td>null</td><td>An optional <a href=https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html#socketTimeout(java.time.Dura [...]
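
A sketch of switching to the Apache HTTP Client, assuming a http-client.type catalog property (an assumption; of the names used here, only http-client.apache.socket-timeout-ms appears in the property table above):

    # http-client.type is assumed; the socket timeout property is from the table above
    spark-sql --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
        --conf spark.sql.catalog.my_catalog.http-client.type=apache \
        --conf spark.sql.catalog.my_catalog.http-client.apache.socket-timeout-ms=30000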
 </span></span></code></pre></div><h2 id=run-iceberg-on-aws>Run Iceberg on AWS</h2><h3 id=amazon-athena>Amazon Athena</h3><p><a href=https://aws.amazon.com/athena/>Amazon Athena</a> provides a serverless query engine that could be used to perform read, write, update and optimization tasks against Iceberg tables.
 More details could be found <a href=https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg.html>here</a>.</p><h3 id=amazon-emr>Amazon EMR</h3><p><a href=https://aws.amazon.com/emr/>Amazon EMR</a> can provision clusters with <a href=https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark.html>Spark</a> (EMR 6 for Spark 3, EMR 5 for Spark 2),
 <a href=https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive.html>Hive</a>, <a href=https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html>Flink</a>,
@@ -283,7 +282,7 @@ Please refer to the <a href=https://docs.aws.amazon.com/emr/latest/ReleaseGuide/
 </span></span><span style=display:flex><span>install_dependencies $LIB_PATH $ICEBERG_MAVEN_URL $ICEBERG_VERSION <span style=color:#e6db74>&#34;</span><span style=color:#e6db74>${</span>ICEBERG_PACKAGES[@]<span style=color:#e6db74>}</span><span style=color:#e6db74>&#34;</span>
 </span></span><span style=display:flex><span>install_dependencies $LIB_PATH $AWS_MAVEN_URL $AWS_SDK_VERSION <span style=color:#e6db74>&#34;</span><span style=color:#e6db74>${</span>AWS_PACKAGES[@]<span style=color:#e6db74>}</span><span style=color:#e6db74>&#34;</span>
 </span></span></code></pre></div><h3 id=aws-glue>AWS Glue</h3><p><a href=https://aws.amazon.com/glue/>AWS Glue</a> provides a serverless data integration service
-that could be used to perform read, write, update tasks against Iceberg tables.
+that could be used to perform read, write and update tasks against Iceberg tables.
 More details could be found <a href=https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-iceberg.html>here</a>.</p><h3 id=aws-eks>AWS EKS</h3><p><a href=https://aws.amazon.com/eks/>AWS Elastic Kubernetes Service (EKS)</a> can be used to start any Spark, Flink, Hive, Presto or Trino clusters to work with Iceberg.
 Search the <a href=../../../blogs>Iceberg blogs</a> page for tutorials around running Iceberg with Docker and Kubernetes.</p><h3 id=amazon-kinesis>Amazon Kinesis</h3><p><a href=https://aws.amazon.com/about-aws/whats-new/2019/11/you-can-now-run-fully-managed-apache-flink-applications-with-apache-kafka/>Amazon Kinesis Data Analytics</a> provides a platform
 to run fully managed Apache Flink applications. You can include Iceberg in your application Jar and run it in the platform.</p></div><div id=toc class=markdown-body><div id=full><nav id=TableOfContents><ul><li><a href=#enabling-aws-integration>Enabling AWS Integration</a><ul><li><a href=#spark>Spark</a></li><li><a href=#flink>Flink</a></li><li><a href=#hive>Hive</a></li></ul></li><li><a href=#catalogs>Catalogs</a><ul><li><a href=#glue-catalog>Glue Catalog</a></li><li><a href=#dynamodb-ca [...]
diff --git a/docs/1.3.0/branching/index.html b/docs/1.3.0/branching/index.html
index ff24a240..303e1e1b 100644
--- a/docs/1.3.0/branching/index.html
+++ b/docs/1.3.0/branching/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a id=active href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
@@ -15,7 +15,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=API class=collapse><ul class=sub-menu><li><a href=../java-api-quickstart/>Java Quickstart</a></li><li><a href=../api/>Java API</a></li><li><a href=../custom-catalog/>Java Custom Catalog</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href=#Migration><span>Migration</span>
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Migration class=collapse><ul class=sub-menu><li><a href=../table-migration/>Overview</a></li><li><a href=../hive-migration/>Hive Migration</a></li><li><a href=../delta-lake-migration/>Delta Lake Migration</a></li></ul></div><li><a href=https://iceberg.apache.org/docs/1.3.0/../../javadoc/latest><span>Javadoc</span></a></li><li><a target=_blank href=https://py.iceberg.apache.org/><span>PyIceberg</span></a></li></div></div><div id=content c [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Migration class=collapse><ul class=sub-menu><li><a href=../table-migration/>Overview</a></li><li><a href=../hive-migration/>Hive Migration</a></li><li><a href=../delta-lake-migration/>Delta Lake Migration</a></li></ul></div><li><a href=https://iceberg.apache.org/docs/1.3.0/../../javadoc/latest><span>Javadoc</span></a></li><li><a target=_blank href=https://py.iceberg.apache.org/><span>PyIceberg</span></a></li></div></div><div id=content c [...]
 Snapshots are fundamental in Iceberg as they are the basis for reader isolation and time travel queries.
 For controlling metadata size and storage costs, Iceberg provides snapshot lifecycle management procedures such as <a href=../../spark/spark-procedures/#expire-snapshots><code>expire_snapshots</code></a> for removing unused snapshots and data files that are no longer necessary, based on table snapshot retention properties.</p><p><strong>For more sophisticated snapshot lifecycle management, Iceberg supports branches and tags, which are named references to snapshots with their own independent lifecy [...]
 Branches are independent lineages of snapshots and point to the head of the lineage.
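
A hedged sketch of the expire_snapshots procedure referenced above, run from the shell; the argument names and the cutoff timestamp are assumptions based on the Spark procedures page:

    # removes snapshots older than the cutoff, plus data files they alone reference
    spark-sql -e "CALL my_catalog.system.expire_snapshots(table => 'db.sample', older_than => TIMESTAMP '2023-01-01 00:00:00')"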
diff --git a/docs/1.3.0/configuration/index.html b/docs/1.3.0/configuration/index.html
index dda805c0..3381a915 100644
--- a/docs/1.3.0/configuration/index.html
+++ b/docs/1.3.0/configuration/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a id=active href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
@@ -15,15 +15,18 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=API class=collapse><ul class=sub-menu><li><a href=../java-api-quickstart/>Java Quickstart</a></li><li><a href=../api/>Java API</a></li><li><a href=../custom-catalog/>Java Custom Catalog</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href=#Migration><span>Migration</span>
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Migration class=collapse><ul class=sub-menu><li><a href=../table-migration/>Overview</a></li><li><a href=../hive-migration/>Hive Migration</a></li><li><a href=../delta-lake-migration/>Delta Lake Migration</a></li></ul></div><li><a href=https://iceberg.apache.org/docs/1.3.0/../../javadoc/latest><span>Javadoc</span></a></li><li><a target=_blank href=https://py.iceberg.apache.org/><span>PyIceberg</span></a></li></div></div><div id=content c [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Migration class=collapse><ul class=sub-menu><li><a href=../table-migration/>Overview</a></li><li><a href=../hive-migration/>Hive Migration</a></li><li><a href=../delta-lake-migration/>Delta Lake Migration</a></li></ul></div><li><a href=https://iceberg.apache.org/docs/1.3.0/../../javadoc/latest><span>Javadoc</span></a></li><li><a target=_blank href=https://py.iceberg.apache.org/><span>PyIceberg</span></a></li></div></div><div id=content c [...]
 The value of these properties are not persisted as a part of the table metadata.</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>format-version</td><td>1</td><td>Table&rsquo;s format version (can be 1 or 2) as defined in the <a href=../../../spec/#format-versioning>Spec</a>.</td></tr></tbody></table><h3 id=compatibility-flags>Compatibility flags</h3><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><t [...]
 Any other custom catalog can access the properties by implementing <code>Catalog.initialize(catalogName, catalogProperties)</code>.
 The properties can be manually constructed or passed in from a compute engine like Spark or Flink.
 Spark uses its session properties as catalog properties, see more details in the <a href=../spark-configuration#catalog-configuration>Spark configuration</a> section.
 Flink passes in catalog properties through <code>CREATE CATALOG</code> statement, see more details in the <a href=../flink/#creating-catalogs-and-using-catalogs>Flink</a> section.</p><h3 id=lock-catalog-properties>Lock catalog properties</h3><p>Here are the catalog properties related to locking. They are used by some catalog implementations to control the locking behavior during commits.</p><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td [...]
-The HMS table locking is a 2-step process:</p><ol><li>Lock Creation: Create lock in HMS and queue for acquisition</li><li>Lock Check: Check if lock successfully acquired</li></ol><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>iceberg.hive.client-pool-size</td><td>5</td><td>The size of the Hive client pool when tracking tables in HMS</td></tr><tr><td>iceberg.hive.lock-creation-timeout-ms</td><td>180000 (3 min)</td><td>Maximum time in mil [...]
+The HMS table locking is a 2-step process:</p><ol><li>Lock Creation: Create lock in HMS and queue for acquisition</li><li>Lock Check: Check if lock successfully acquired</li></ol><table><thead><tr><th>Property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>iceberg.hive.client-pool-size</td><td>5</td><td>The size of the Hive client pool when tracking tables in HMS</td></tr><tr><td>iceberg.hive.lock-creation-timeout-ms</td><td>180000 (3 min)</td><td>Maximum time in mil [...]
 of the Hive Metastore (<code>hive.txn.timeout</code> or <code>metastore.txn.timeout</code> in the newer versions). Otherwise, the heartbeats on the lock (which happens during the lock checks) would end up expiring in the
-Hive Metastore before the lock is retried from Iceberg.</p></div><div id=toc class=markdown-body><div id=full><nav id=TableOfContents><ul><li><a href=#table-properties>Table properties</a><ul><li><a href=#read-properties>Read properties</a></li><li><a href=#write-properties>Write properties</a></li><li><a href=#table-behavior-properties>Table behavior properties</a></li><li><a href=#reserved-table-properties>Reserved table properties</a></li><li><a href=#compatibility-flags>Compatibility [...]
+Hive Metastore before the lock is retried from Iceberg.</p><p>Warning: Setting <code>iceberg.engine.hive.lock-enabled</code>=<code>false</code> will cause HiveCatalog to commit to tables without using Hive locks.
+This should only be set to <code>false</code> if all of the following conditions are met:</p><ul><li><a href=https://issues.apache.org/jira/browse/HIVE-26882>HIVE-26882</a>
+is available on the Hive Metastore server</li><li>All other HiveCatalogs committing to tables that this HiveCatalog commits to are also on Iceberg 1.3 or later</li><li>All other HiveCatalogs committing to tables that this HiveCatalog commits to have also disabled Hive locks on commit.</li></ul><p><strong>Failing to ensure these conditions risks corrupting the table.</strong></p><p>Even with <code>iceberg.engine.hive.lock-enabled</code> set to <code>false</code>, a HiveCatalog can still u [...]
+This is useful in the case where other HiveCatalogs cannot be upgraded and set to commit without using Hive locks.</p></div><div id=toc class=markdown-body><div id=full><nav id=TableOfContents><ul><li><a href=#table-properties>Table properties</a><ul><li><a href=#read-properties>Read properties</a></li><li><a href=#write-properties>Write properties</a></li><li><a href=#table-behavior-properties>Table behavior properties</a></li><li><a href=#reserved-table-properties>Reserved table proper [...]
 <script src=https://iceberg.apache.org/docs/1.3.0//js/jquery.easing.min.js></script>
 <script type=text/javascript src=https://iceberg.apache.org/docs/1.3.0//js/search.js></script>
 <script src=https://iceberg.apache.org/docs/1.3.0//js/bootstrap.min.js></script>
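
A sketch of setting the reserved format-version table property described on this page; the catalog and table names are placeholders, while the property name and its allowed values (1 or 2) come from the text:

    # create a format-version 2 table; the property is validated at table creation
    spark-sql -e "CREATE TABLE my_catalog.db.sample (id bigint, data string) USING iceberg TBLPROPERTIES ('format-version'='2')"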
diff --git a/docs/1.3.0/custom-catalog/index.html b/docs/1.3.0/custom-catalog/index.html
index f7aab849..33fbcd55 100644
--- a/docs/1.3.0/custom-catalog/index.html
+++ b/docs/1.3.0/custom-catalog/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/dell/index.html b/docs/1.3.0/dell/index.html
index c291b04e..420fd16c 100644
--- a/docs/1.3.0/dell/index.html
+++ b/docs/1.3.0/dell/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/delta-lake-migration/index.html b/docs/1.3.0/delta-lake-migration/index.html
index 07c0025b..428e3275 100644
--- a/docs/1.3.0/delta-lake-migration/index.html
+++ b/docs/1.3.0/delta-lake-migration/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/evolution/index.html b/docs/1.3.0/evolution/index.html
index 97e01ae0..41085599 100644
--- a/docs/1.3.0/evolution/index.html
+++ b/docs/1.3.0/evolution/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a id=active href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/flink-actions/index.html b/docs/1.3.0/flink-actions/index.html
index f9860ddd..67b8bfd1 100644
--- a/docs/1.3.0/flink-actions/index.html
+++ b/docs/1.3.0/flink-actions/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a id=active href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/flink-configuration/index.html b/docs/1.3.0/flink-configuration/index.html
index 73285f80..ccf5d727 100644
--- a/docs/1.3.0/flink-configuration/index.html
+++ b/docs/1.3.0/flink-configuration/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a id=active href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
@@ -34,7 +34,7 @@
     .getConfiguration()
     .set(FlinkReadOptions.SPLIT_FILE_OPEN_COST_OPTION, 1000L);
 ...
-</code></pre><p><code>Read option</code> has the highest priority, followed by <code>Flink configuration</code> and then <code>Table property</code>.</p><table><thead><tr><th>Read option</th><th>Flink configuration</th><th>Table property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>snapshot-id</td><td>N/A</td><td>N/A</td><td>null</td><td>For time travel in batch mode. Read data from the specified snapshot-id.</td></tr><tr><td>case-sensitive</td><td>connector.iceber [...]
+</code></pre><p><code>Read option</code> has the highest priority, followed by <code>Flink configuration</code> and then <code>Table property</code>.</p><table><thead><tr><th>Read option</th><th>Flink configuration</th><th>Table property</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>snapshot-id</td><td>N/A</td><td>N/A</td><td>null</td><td>For time travel in batch mode. Read data from the specified snapshot-id.</td></tr><tr><td>case-sensitive</td><td>connector.iceber [...]
     .table(table)
     .tableLoader(tableLoader)
     .set(&#34;write-format&#34;, &#34;orc&#34;)
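The precedence described in the flink-configuration hunk above can be made concrete with a minimal Java sketch. It reuses the env-configuration call visible in the hunk; the table-property and per-builder routes (whose exact lines are truncated above) appear only as comments. `env` is assumed to be the job's execution environment.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.iceberg.flink.FlinkReadOptions;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Lowest priority: the table property (e.g. read.split.open-file-cost)
    // stored on the Iceberg table itself.
    //
    // Middle priority: a job-wide Flink configuration value overrides the
    // table property for every Iceberg source in this job:
    env.getConfig()
        .getConfiguration()
        .set(FlinkReadOptions.SPLIT_FILE_OPEN_COST_OPTION, 1000L);
    //
    // Highest priority: an option set directly on a single source or sink
    // builder, e.g. .set("write-format", "orc") as in the hunk above,
    // overrides both of the other levels.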
diff --git a/docs/1.3.0/flink-connector/index.html b/docs/1.3.0/flink-connector/index.html
index d4e78689..2b601f25 100644
--- a/docs/1.3.0/flink-connector/index.html
+++ b/docs/1.3.0/flink-connector/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a id=active href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/flink-ddl/index.html b/docs/1.3.0/flink-ddl/index.html
index b6ef2b6e..0274770b 100644
--- a/docs/1.3.0/flink-ddl/index.html
+++ b/docs/1.3.0/flink-ddl/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a id=active href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/flink-queries/index.html b/docs/1.3.0/flink-queries/index.html
index 2212f81b..6652e5cf 100644
--- a/docs/1.3.0/flink-queries/index.html
+++ b/docs/1.3.0/flink-queries/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a id=active href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/flink-writes/index.html b/docs/1.3.0/flink-writes/index.html
index 14a597dd..6f83f6b0 100644
--- a/docs/1.3.0/flink-writes/index.html
+++ b/docs/1.3.0/flink-writes/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a id=active href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/flink/index.html b/docs/1.3.0/flink/index.html
index 9f17b3ba..667f3e81 100644
--- a/docs/1.3.0/flink/index.html
+++ b/docs/1.3.0/flink/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-toggle data-toggle=collapse data-parent=full href=#Flink><spa [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class=chevron-tog [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class="collapse in"><ul class=sub-menu><li><a id=active href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li>< [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/getting-started/index.html b/docs/1.3.0/getting-started/index.html
index 0c87946d..6811a258 100644
--- a/docs/1.3.0/getting-started/index.html
+++ b/docs/1.3.0/getting-started/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a id=active href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-p [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a id=active href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/hive-migration/index.html b/docs/1.3.0/hive-migration/index.html
index 585d7791..6710cbda 100644
--- a/docs/1.3.0/hive-migration/index.html
+++ b/docs/1.3.0/hive-migration/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/hive/index.html b/docs/1.3.0/hive/index.html
index 216bc849..6af37499 100644
--- a/docs/1.3.0/hive/index.html
+++ b/docs/1.3.0/hive/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a id=active hre [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/index.html b/docs/1.3.0/index.html
index cdb35adc..de92e08a 100644
--- a/docs/1.3.0/index.html
+++ b/docs/1.3.0/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=./branching/>Branching and Tagging</a></li><li><a href=./configuration/>Configuration</a></li><li><a href=./evolution/>Evolution</a></li><li><a href=./maintenance/>Maintenance</a></li><li><a href=./partitioning/>Partitioning</a></li><li><a href=./performance/>Performance</a></li><li><a href=./reliability/>Reliability</a></li><li><a href=./schemas/>Schemas</a></li></ul></div><li><a clas [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=./getting-started/>Getting Started</a></li><li><a href=./spark-ddl/>DDL</a></li><li><a href=./spark-procedures/>Procedures</a></li><li><a href=./spark-queries/>Queries</a></li><li><a href=./spark-structured-streaming/>Structured Streaming</a></li><li><a href=./spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href=#Flin [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=./getting-started/>Getting Started</a></li><li><a href=./spark-configuration/>Configuration</a></li><li><a href=./spark-ddl/>DDL</a></li><li><a href=./spark-procedures/>Procedures</a></li><li><a href=./spark-queries/>Queries</a></li><li><a href=./spark-structured-streaming/>Structured Streaming</a></li><li><a href=./spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle co [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=./flink/>Flink Getting Started</a></li><li><a href=./flink-connector/>Flink Connector</a></li><li><a href=./flink-ddl/>Flink DDL</a></li><li><a href=./flink-queries/>Flink Queries</a></li><li><a href=./flink-writes/>Flink Writes</a></li><li><a href=./flink-actions/>Flink Actions</a></li><li><a href=./flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=./hive/><span>H [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/index.xml b/docs/1.3.0/index.xml
index 0ffdd9da..bf0861e5 100644
--- a/docs/1.3.0/index.xml
+++ b/docs/1.3.0/index.xml
@@ -4,10 +4,11 @@ Using Iceberg in Spark 3 To use Iceberg in a Spark shell, use the --packages opt
 Feature support Iceberg compatibility with Hive 2.x and Hive 3.1.2/3 supports the following features:
 Creating a table Dropping a table Reading a table Inserting into a table (INSERT INTO) DML operations work only with MapReduce execution engine. With Hive version 4.0.0-alpha-2 and above, the Iceberg integration when using HiveCatalog supports the following additional features:
 Altering a table with expiring snapshots.</description></item><item><title>AWS</title><link>https://iceberg.apache.org/docs/1.3.0/aws/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/docs/1.3.0/aws/</guid><description>Iceberg AWS Integrations Iceberg provides integration with different AWS services through the iceberg-aws module. This section describes how to use Iceberg with AWS.
-Enabling AWS Integration The iceberg-aws module is bundled with Spark and Flink engine runtimes for all versions from 0.11.0 onwards. However, the AWS clients are not bundled so that you can use the same client version as your application. You will need to provide the AWS v2 SDK because that is what Iceberg depends on.</description></item><item><title>Branching and Tagging</title><link>https://iceberg.apache.org/docs/1.3.0/branching/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDat [...]
+Enabling AWS Integration The iceberg-aws module is bundled with Spark and Flink engine runtimes for all versions from 0.11.0 onwards. However, the AWS clients are not bundled so that you can use the same client version as your application. You will need to provide the AWS v2 SDK because that is what Iceberg depends on.</description></item><item><title>Branching and Tagging</title><link>https://iceberg.apache.org/docs/1.3.0/branching/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDat [...]
+For more sophisticated snapshot lifecycle management, Iceberg supports branches and tags which are named references to snapshots with their own independent lifecycles.</description></item><item><title>Configuration</title><link>https://iceberg.apache.org/docs/1.3.0/configuration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/docs/1.3.0/configuration/</guid><description>Configuration Table properties Iceberg tables support table properties to con [...]
 Read properties Property Default Description read.split.target-size 134217728 (128 MB) Target size when combining data input splits read.split.metadata-target-size 33554432 (32 MB) Target size when combining metadata input splits read.split.planning-lookback 10 Number of bins to consider when combining input splits read.split.open-file-cost 4194304 (4 MB) The estimated cost to open a file, used as a minimum weight when combining splits.</description></item><item><title>Configuration</tit [...]
 This creates an Iceberg catalog named hive_prod that loads tables from a Hive metastore:
-spark.sql.catalog.hive_prod = org.apache.iceberg.spark.SparkCatalog spark.sql.catalog.hive_prod.type = hive spark.sql.catalog.hive_prod.uri = thrift://metastore-host:port # omit uri to use the same URI as Spark: hive.metastore.uris in hive-site.xml Iceberg also supports a directory-based catalog in HDFS that can be configured using type=hadoop:</description></item><item><title>DDL</title><link>https://iceberg.apache.org/docs/1.3.0/spark-ddl/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000 [...]
+spark.sql.catalog.hive_prod = org.apache.iceberg.spark.SparkCatalog spark.sql.catalog.hive_prod.type = hive spark.sql.catalog.hive_prod.uri = thrift://metastore-host:port # omit uri to use the same URI as Spark: hive.metastore.uris in hive-site.xml Below is an example for a REST catalog named rest_prod that loads tables from REST URL http://localhost:8080:</description></item><item><title>DDL</title><link>https://iceberg.apache.org/docs/1.3.0/spark-ddl/</link><pubDate>Mon, 01 Jan 0001 00 [...]
 CREATE TABLE Spark 3 can create tables in any Iceberg catalog with the clause USING iceberg:
 CREATE TABLE prod.db.sample ( id bigint COMMENT &amp;#39;unique id&amp;#39;, data string) USING iceberg Iceberg will convert the column type in Spark to corresponding Iceberg type. Please check the section of type compatibility on creating table for details.</description></item><item><title>Dell</title><link>https://iceberg.apache.org/docs/1.3.0/dell/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/docs/1.3.0/dell/</guid><description>Iceberg Dell  [...]
 See Dell ECS for more information on Dell ECS.
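The hive_prod catalog properties quoted in the feed entry above are ordinary Spark configuration, so they can equally be set programmatically. A minimal Java sketch, keeping the docs' placeholder metastore host and port; the app name is hypothetical:

    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder()
        .appName("iceberg-hive-catalog-example")  // hypothetical app name
        // Register an Iceberg catalog named hive_prod backed by a Hive metastore
        .config("spark.sql.catalog.hive_prod", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.hive_prod.type", "hive")
        // Omit the uri property to fall back to hive.metastore.uris from hive-site.xml
        .config("spark.sql.catalog.hive_prod.uri", "thrift://metastore-host:port")
        .getOrCreate();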
diff --git a/docs/1.3.0/java-api-quickstart/index.html b/docs/1.3.0/java-api-quickstart/index.html
index 00924a19..f10d5fd3 100644
--- a/docs/1.3.0/java-api-quickstart/index.html
+++ b/docs/1.3.0/java-api-quickstart/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/jdbc/index.html b/docs/1.3.0/jdbc/index.html
index a42fce98..88c75d63 100644
--- a/docs/1.3.0/jdbc/index.html
+++ b/docs/1.3.0/jdbc/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/maintenance/index.html b/docs/1.3.0/maintenance/index.html
index 038b5680..1045e321 100644
--- a/docs/1.3.0/maintenance/index.html
+++ b/docs/1.3.0/maintenance/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a id=active href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/nessie/index.html b/docs/1.3.0/nessie/index.html
index 6be87a5f..cdd628eb 100644
--- a/docs/1.3.0/nessie/index.html
+++ b/docs/1.3.0/nessie/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/partitioning/index.html b/docs/1.3.0/partitioning/index.html
index 7536acab..8bd6b221 100644
--- a/docs/1.3.0/partitioning/index.html
+++ b/docs/1.3.0/partitioning/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a id=active href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/performance/index.html b/docs/1.3.0/performance/index.html
index ce8eaead..7d28e7d6 100644
--- a/docs/1.3.0/performance/index.html
+++ b/docs/1.3.0/performance/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a id=active href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/reliability/index.html b/docs/1.3.0/reliability/index.html
index 3d53c24b..0e01d25f 100644
--- a/docs/1.3.0/reliability/index.html
+++ b/docs/1.3.0/reliability/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a id=active href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/schemas/index.html b/docs/1.3.0/schemas/index.html
index 03f68cb7..423efb18 100644
--- a/docs/1.3.0/schemas/index.html
+++ b/docs/1.3.0/schemas/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class="collapse in"><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a id=active href=../schemas/>Schemas</a></li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/spark-configuration/index.html b/docs/1.3.0/spark-configuration/index.html
index 331dc177..c7849bf4 100644
--- a/docs/1.3.0/spark-configuration/index.html
+++ b/docs/1.3.0/spark-configuration/index.html
@@ -5,9 +5,9 @@
 <span class=icon-bar></span></button>
 <a class="page-scroll navbar-brand" href=https://iceberg.apache.org/><img class=top-navbar-logo src=https://iceberg.apache.org/docs/1.3.0//img/iceberg-logo-icon.png> Apache Iceberg</a></div><div><input type=search class=form-control id=search-input placeholder=Search... maxlength=64 data-hotkeys=s/></div><div class=versions-dropdown><span>1.2.1</span> <i class="fa fa-chevron-down"></i><div class=versions-dropdown-content><ul><li class=versions-dropdown-selection><a href=https://iceberg.a [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a id=active href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
@@ -19,10 +19,13 @@
 </span></span><span style=display:flex><span>spark.sql.catalog.hive_prod.type = hive
 </span></span><span style=display:flex><span>spark.sql.catalog.hive_prod.uri = thrift://metastore-host:port
 </span></span><span style=display:flex><span># omit uri to use the same URI as Spark: hive.metastore.uris in hive-site.xml
+</span></span></code></pre></div><p>Below is an example for a REST catalog named <code>rest_prod</code> that loads tables from REST URL <code>http://localhost:8080</code>:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-plain data-lang=plain><span style=display:flex><span>spark.sql.catalog.rest_prod = org.apache.iceberg.spark.SparkCatalog
+</span></span><span style=display:flex><span>spark.sql.catalog.rest_prod.type = rest
+</span></span><span style=display:flex><span>spark.sql.catalog.rest_prod.uri = http://localhost:8080
 </span></span></code></pre></div><p>Iceberg also supports a directory-based catalog in HDFS that can be configured using <code>type=hadoop</code>:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-plain data-lang=plain><span style=display:flex><span>spark.sql.catalog.hadoop_prod = org.apache.iceberg.spark.SparkCatalog
 </span></span><span style=display:flex><span>spark.sql.catalog.hadoop_prod.type = hadoop
 </span></span><span style=display:flex><span>spark.sql.catalog.hadoop_prod.warehouse = hdfs://nn:8020/warehouse/path
-</span></span></code></pre></div><div class=info>The Hive-based catalog only loads Iceberg tables. To load non-Iceberg tables in the same Hive metastore, use a <a href=#replacing-the-session-catalog>session catalog</a>.</div><h3 id=catalog-configuration>Catalog configuration</h3><p>A catalog is created and named by adding a property <code>spark.sql.catalog.(catalog-name)</code> with an implementation class for its value.</p><p>Iceberg supplies two implementations:</p><ul><li><code>org.ap [...]
+</span></span></code></pre></div><div class=info>The Hive-based catalog only loads Iceberg tables. To load non-Iceberg tables in the same Hive metastore, use a <a href=#replacing-the-session-catalog>session catalog</a>.</div><h3 id=catalog-configuration>Catalog configuration</h3><p>A catalog is created and named by adding a property <code>spark.sql.catalog.(catalog-name)</code> with an implementation class for its value.</p><p>Iceberg supplies two implementations:</p><ul><li><code>org.ap [...]
 </span></span></span></code></pre></div><p>Spark 3 keeps track of the current catalog and namespace, which can be omitted from table names.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span>USE hive_prod.db;
 </span></span><span style=display:flex><span><span style=color:#66d9ef>SELECT</span> <span style=color:#f92672>*</span> <span style=color:#66d9ef>FROM</span> <span style=color:#66d9ef>table</span> <span style=color:#75715e>-- load db.table from catalog hive_prod
 </span></span></span></code></pre></div><p>To see the current catalog and namespace, run <code>SHOW CURRENT NAMESPACE</code>.</p><h3 id=replacing-the-session-catalog>Replacing the session catalog</h3><p>To add Iceberg table support to Spark&rsquo;s built-in catalog, configure <code>spark_catalog</code> to use Iceberg&rsquo;s <code>SparkSessionCatalog</code>.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code [...]
diff --git a/docs/1.3.0/spark-ddl/index.html b/docs/1.3.0/spark-ddl/index.html
index 501eccdc..e19c5935 100644
--- a/docs/1.3.0/spark-ddl/index.html
+++ b/docs/1.3.0/spark-ddl/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a id=active href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-p [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a id=active href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/spark-procedures/index.html b/docs/1.3.0/spark-procedures/index.html
index dbb857ec..d067b076 100644
--- a/docs/1.3.0/spark-procedures/index.html
+++ b/docs/1.3.0/spark-procedures/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a id=active href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-p [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a id=active href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
@@ -42,16 +42,20 @@ Using the same defaults as bin-pack to determine which files to rewrite.</p><div
 </span></span></code></pre></div><h3 id=rewrite_manifests><code>rewrite_manifests</code></h3><p>Rewrite manifests for a table to optimize scan planning.</p><p>Data files in manifests are sorted by fields in the partition spec. This procedure runs in parallel using a Spark job.</p><p>See the <a href=../../../javadoc/1.3.0/org/apache/iceberg/actions/RewriteManifests.html><code>RewriteManifests</code> Javadoc</a>
 to see more configuration options.</p><div class=info>This procedure invalidates all cached Spark plans that reference the affected table.</div><h4 id=usage-8>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the table to update</td></tr><tr><td><code>use_caching</code></td><td>️</td><td>boolean</td><td>Use Spark caching during operation (defaults to [...]
 </span></span></code></pre></div><p>Rewrite the manifests in table <code>db.sample</code> and disable the use of Spark caching. This could be done to avoid memory issues on executors.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> <span style=color:#66d9ef>catalog_name</span>.<span style=color:#66d9ef>sy [...]
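+-- A sketch of the positional form of this call; the second argument maps to
+-- use_caching:
+CALL catalog_name.system.rewrite_manifests('db.sample', false)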
+</span></span></code></pre></div><h3 id=rewrite_position_delete_files><code>rewrite_position_delete_files</code></h3><p>Iceberg can rewrite position delete files, which serves two purposes:</p><ul><li>Minor Compaction: Compact small position delete files into larger ones. This reduces the size of metadata stored in manifest files and the overhead of opening small delete files.</li><li>Remove Dangling Deletes: Filter out position delete records that refer to data files that are no longer live [...]
+for a list of all the supported options for this procedure.</p><p>Dangling deletes are always filtered out during rewriting.</p><h4 id=output-8>Output</h4><table><thead><tr><th>Output Name</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>rewritten_delete_files_count</code></td><td>int</td><td>Number of delete files which were removed by this command</td></tr><tr><td><code>added_delete_files_count</code></td><td>int</td><td>Number of delete files which were added by th [...]
+</span></span></code></pre></div><p>Rewrite all position delete files in table <code>db.sample</code>, writing new files at the default <code>target-file-size-bytes</code>. Dangling deletes are removed from rewritten delete files.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> <span style=color:#66d9ef>catalog_name</sp [...]
+</span></span></code></pre></div><p>Rewrite position delete files in table <code>db.sample</code>. This selects position delete files in partitions where 2 or more position delete files need to be rewritten based on size criteria. Dangling deletes are removed from rewritten delete files.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span s [...]
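+-- A sketch using the named-argument form; min-input-files restricts the
+-- rewrite to partitions containing at least 2 position delete files:
+CALL catalog_name.system.rewrite_position_delete_files(table => 'db.sample', options => map('min-input-files', '2'))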
 </span></span></code></pre></div><h2 id=table-migration>Table migration</h2><p>The <code>snapshot</code> and <code>migrate</code> procedures help test and migrate existing Hive or Spark tables to Iceberg.</p><h3 id=snapshot><code>snapshot</code></h3><p>Create a lightweight temporary copy of a table for testing, without changing the source table.</p><p>The newly created table can be changed or written to without affecting the source table, but the snapshot uses the original table&rsquo;s [...]
 actions like <code>expire_snapshots</code> which would physically delete data files. Iceberg deletes, which only affect metadata,
 are still allowed. In addition, any operations which affect the original data files will disrupt the snapshot&rsquo;s
 integrity. DELETE statements executed against the original Hive table will remove original data files and the
-<code>snapshot</code> table will no longer be able to access them.</div><p>See <a href=#migrate><code>migrate</code></a> to replace an existing table with an Iceberg table.</p><h4 id=usage-9>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>source_table</code></td><td>✔️</td><td>string</td><td>Name of the table to snapshot</td></tr><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the  [...]
+<code>snapshot</code> table will no longer be able to access them.</div><p>See <a href=#migrate><code>migrate</code></a> to replace an existing table with an Iceberg table.</p><h4 id=usage-10>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>source_table</code></td><td>✔️</td><td>string</td><td>Name of the table to snapshot</td></tr><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the [...]
 catalog&rsquo;s default location for <code>db.snap</code>.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> <span style=color:#66d9ef>catalog_name</span>.<span style=color:#66d9ef>system</span>.snapshot(<span style=color:#e6db74>&#39;db.sample&#39;</span>, <span style=color:#e6db74>&#39;db.snap&#39;</span>)
 </span></span></code></pre></div><p>Create an isolated Iceberg table named <code>db.snap</code> which references table <code>db.sample</code> at
 a manually specified location <code>/tmp/temptable/</code>.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> <span style=color:#66d9ef>catalog_name</span>.<span style=color:#66d9ef>system</span>.snapshot(<span style=color:#e6db74>&#39;db.sample&#39;</span>, <span style=color:#e6db74>&#39;db.snap&#39;</span [...]
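+-- The same call as a sketch with named arguments:
+CALL catalog_name.system.snapshot(source_table => 'db.sample', table => 'db.snap', location => '/tmp/temptable/')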
 </span></span></code></pre></div><h3 id=migrate><code>migrate</code></h3><p>Replace a table with an Iceberg table, loaded with the source&rsquo;s data files.</p><p>Table schema, partitioning, properties, and location will be copied from the source table.</p><p>Migrate will fail if any table partition uses an unsupported format. Supported formats are Avro, Parquet, and ORC.
-Existing data files are added to the Iceberg table&rsquo;s metadata and can be read using a name-to-id mapping created from the original table schema.</p><p>To leave the original table intact while testing, use <a href=#snapshot><code>snapshot</code></a> to create new temporary table that shares source data files and schema.</p><p>By default, the original table is retained with the name <code>table_BACKUP_</code>.</p><h4 id=usage-10>Usage</h4><table><thead><tr><th>Argument Name</th><th>R [...]
+Existing data files are added to the Iceberg table&rsquo;s metadata and can be read using a name-to-id mapping created from the original table schema.</p><p>To leave the original table intact while testing, use <a href=#snapshot><code>snapshot</code></a> to create a new temporary table that shares source data files and schema.</p><p>By default, the original table is retained with the name <code>table_BACKUP_</code>.</p><h4 id=usage-11>Usage</h4><table><thead><tr><th>Argument Name</th><th>R [...]
 </span></span></code></pre></div><p>Migrate <code>db.sample</code> in the current catalog to an Iceberg table without adding any additional properties:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> <span style=color:#66d9ef>catalog_name</span>.<span style=color:#66d9ef>system</span>.migrate(<span style= [...]
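+-- A sketch of a migrate call that also sets a table property; the property
+-- name and value here are illustrative:
+CALL catalog_name.system.migrate('spark_catalog.db.sample', map('foo', 'bar'))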
 </span></span></code></pre></div><h3 id=add_files><code>add_files</code></h3><p>Attempts to directly add files from a Hive or file-based table into a given Iceberg table. Unlike migrate or
 snapshot, <code>add_files</code> can import files from a specific partition or partitions and does not create a new Iceberg table.
@@ -59,7 +63,7 @@ This command will create metadata for the new files and will not move them. This
 of the files to determine if they actually match the schema of the Iceberg table. Upon completion, the Iceberg table
 will then treat these files as if they are part of the set of files owned by Iceberg. This means any subsequent
 <code>expire_snapshots</code> calls will be able to physically delete the added files. This method should not be used if
-<code>migrate</code> or <code>snapshot</code> are possible.</p><h4 id=usage-11>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Table which will have files added to</td></tr><tr><td><code>source_table</code></td><td>✔️</td><td>string</td><td>Table where files should come from, paths are also possible in the form of `file_format`.`path`</td></tr><tr><td><cod [...]
+<code>migrate</code> or <code>snapshot</code> are possible.</p><h4 id=usage-12>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Table to which files will be added</td></tr><tr><td><code>source_table</code></td><td>✔️</td><td>string</td><td>Table the files should come from; paths are also possible in the form of `file_format`.`path`</td></tr><tr><td><cod [...]
 <code>db.tbl</code>. Only add files that exist within partitions where <code>part_col_1</code> is equal to <code>A</code>.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> spark_catalog.<span style=color:#66d9ef>system</span>.add_files(
 </span></span><span style=display:flex><span><span style=color:#66d9ef>table</span> <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;db.tbl&#39;</span>,
 </span></span><span style=display:flex><span>source_table <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;db.src_tbl&#39;</span>,
@@ -70,18 +74,18 @@ files regardless of what partition they belong to.</p><div class=highlight><pre
 </span></span><span style=display:flex><span>  <span style=color:#66d9ef>table</span> <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;db.tbl&#39;</span>,
 </span></span><span style=display:flex><span>  source_table <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;`parquet`.`path/to/table`&#39;</span>
 </span></span><span style=display:flex><span>)
-</span></span></code></pre></div><h3 id=register_table><code>register_table</code></h3><p>Creates a catalog entry for a metadata.json file which already exists but does not have a corresponding catalog identifier.</p><h4 id=usage-12>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Table which is to be registered</td></tr><tr><td><code>metadata_file</code></ [...]
-Only use this procedure when the table is no longer registered in an existing catalog, or you are moving a table between catalogs.</div><h4 id=output-11>Output</h4><table><thead><tr><th>Output Name</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>current_snapshot_id</code></td><td>long</td><td>The current snapshot ID of the newly registered Iceberg table</td></tr><tr><td><code>total_records_count</code></td><td>long</td><td>Total records count of the newly registere [...]
+</span></span></code></pre></div><h3 id=register_table><code>register_table</code></h3><p>Creates a catalog entry for a metadata.json file which already exists but does not have a corresponding catalog identifier.</p><h4 id=usage-13>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Table which is to be registered</td></tr><tr><td><code>metadata_file</code></ [...]
+Only use this procedure when the table is no longer registered in an existing catalog, or you are moving a table between catalogs.</div><h4 id=output-12>Output</h4><table><thead><tr><th>Output Name</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>current_snapshot_id</code></td><td>long</td><td>The current snapshot ID of the newly registered Iceberg table</td></tr><tr><td><code>total_records_count</code></td><td>long</td><td>Total records count of the newly registere [...]
 </span></span><span style=display:flex><span>  <span style=color:#66d9ef>table</span> <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;db.tbl&#39;</span>,
 </span></span><span style=display:flex><span>  metadata_file <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;path/to/metadata/file.json&#39;</span>
 </span></span><span style=display:flex><span>)
-</span></span></code></pre></div><h2 id=metadata-information>Metadata information</h2><h3 id=ancestors_of><code>ancestors_of</code></h3><p>Report the live snapshot IDs of parents of a specified snapshot</p><h4 id=usage-13>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the table to report live snapshot IDs</td></tr><tr><td><code>snapshot_id</code>< [...]
+</span></span></code></pre></div><h2 id=metadata-information>Metadata information</h2><h3 id=ancestors_of><code>ancestors_of</code></h3><p>Report the live snapshot IDs of parents of a specified snapshot.</p><h4 id=usage-14>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the table to report live snapshot IDs</td></tr><tr><td><code>snapshot_id</code>< [...]
 </span></span><span style=display:flex><span>      <span style=color:#ae81ff>\ </span>-&gt; C<span style=color:#e6db74>&#39; -&gt; (D&#39;</span><span style=color:#f92672>)</span>
 </span></span></code></pre></div><p>Not specifying the snapshot ID would return A -> B -> C&rsquo; -> D&rsquo;, while providing the snapshot ID of
-D as an argument would return A-> B -> C -> D</p></blockquote><h4 id=output-12>Output</h4><table><thead><tr><th>Output Name</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>snapshot_id</code></td><td>long</td><td>the ancestor snapshot id</td></tr><tr><td><code>timestamp</code></td><td>long</td><td>snapshot creation time</td></tr></tbody></table><h4 id=examples-9>Examples</h4><p>Get all the snapshot ancestors of current snapshots(default)</p><div class=highlight><pre [...]
+D as an argument would return A -> B -> C -> D</p></blockquote><h4 id=output-13>Output</h4><table><thead><tr><th>Output Name</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>snapshot_id</code></td><td>long</td><td>the ancestor snapshot id</td></tr><tr><td><code>timestamp</code></td><td>long</td><td>snapshot creation time</td></tr></tbody></table><h4 id=examples-10>Examples</h4><p>Get all the snapshot ancestors of current snapshots (default)</p><div class=highlight><pr [...]
 </span></span></code></pre></div><p>Get all the snapshot ancestors by a particular snapshot</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CALL</span> spark_catalog.<span style=color:#66d9ef>system</span>.ancestors_of(<span style=color:#e6db74>&#39;db.tbl&#39;</span>, <span style=color:#ae81ff>1</span>)
 </span></span><span style=display:flex><span><span style=color:#66d9ef>CALL</span> spark_catalog.<span style=color:#66d9ef>system</span>.ancestors_of(snapshot_id <span style=color:#f92672>=&gt;</span> <span style=color:#ae81ff>1</span>, <span style=color:#66d9ef>table</span> <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;db.tbl&#39;</span>)
-</span></span></code></pre></div><h2 id=change-data-capture>Change Data Capture</h2><h3 id=create_changelog_view><code>create_changelog_view</code></h3><p>Creates a view that contains the changes from a given table.</p><h4 id=usage-14>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the source table for the changelog</td></tr><tr><td><code>changelog [...]
+</span></span></code></pre></div><h2 id=change-data-capture>Change Data Capture</h2><h3 id=create_changelog_view><code>create_changelog_view</code></h3><p>Creates a view that contains the changes from a given table.</p><h4 id=usage-15>Usage</h4><table><thead><tr><th>Argument Name</th><th>Required?</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><code>table</code></td><td>✔️</td><td>string</td><td>Name of the source table for the changelog</td></tr><tr><td><code>changelog [...]
 </span></span><span style=display:flex><span>  <span style=color:#66d9ef>table</span> <span style=color:#f92672>=&gt;</span> <span style=color:#e6db74>&#39;db.tbl&#39;</span>,
 </span></span><span style=display:flex><span>  <span style=color:#66d9ef>options</span> <span style=color:#f92672>=&gt;</span> <span style=color:#66d9ef>map</span>(<span style=color:#e6db74>&#39;start-snapshot-id&#39;</span>,<span style=color:#e6db74>&#39;1&#39;</span>,<span style=color:#e6db74>&#39;end-snapshot-id&#39;</span>, <span style=color:#e6db74>&#39;2&#39;</span>)
 </span></span><span style=display:flex><span>)
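+-- A sketch of the same view computing pre/post update images, assuming an
+-- identifier column named id:
+CALL spark_catalog.system.create_changelog_view(
+  table => 'db.tbl',
+  options => map('start-snapshot-id', '1', 'end-snapshot-id', '2'),
+  identifier_columns => array('id')
+)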
@@ -108,7 +112,7 @@ pair of a delete row and an insert row. Identifier columns are used for determin
 refer to the same row. If the two records share the same values for the identifier columns, they are considered to be before
 and after states of the same row. You can either set identifier fields in the table schema or input them as the procedure parameters.</p><p>The following example shows pre/post update images computation with an identifier column (<code>id</code>), where a row deletion
 and an insertion with the same <code>id</code> are treated as a single update operation. Specifically, suppose we have the following pair of rows:</p><table><thead><tr><th>id</th><th>name</th><th>_change_type</th></tr></thead><tbody><tr><td>3</td><td>Robert</td><td>DELETE</td></tr><tr><td>3</td><td>Dan</td><td>INSERT</td></tr></tbody></table><p>In this case, the procedure marks the row before the update as an <code>UPDATE_BEFORE</code> image and the row after the update
-as an <code>UPDATE_AFTER</code> image, resulting in the following pre/post update images:</p><table><thead><tr><th>id</th><th>name</th><th>_change_type</th></tr></thead><tbody><tr><td>3</td><td>Robert</td><td>UPDATE_BEFORE</td></tr><tr><td>3</td><td>Dan</td><td>UPDATE_AFTER</td></tr></tbody></table></div><div id=toc class=markdown-body><div id=full><nav id=TableOfContents><ul><li><a href=#usage>Usage</a><ul><li><a href=#named-arguments>Named arguments</a></li><li><a href=#positional-argu [...]
+as an <code>UPDATE_AFTER</code> image, resulting in the following pre/post update images:</p><table><thead><tr><th>id</th><th>name</th><th>_change_type</th></tr></thead><tbody><tr><td>3</td><td>Robert</td><td>UPDATE_BEFORE</td></tr><tr><td>3</td><td>Dan</td><td>UPDATE_AFTER</td></tr></tbody></table></div><div id=toc class=markdown-body><div id=full><nav id=TableOfContents><ul><li><a href=#usage>Usage</a><ul><li><a href=#named-arguments>Named arguments</a></li><li><a href=#positional-argu [...]
 <script src=https://iceberg.apache.org/docs/1.3.0//js/jquery.easing.min.js></script>
 <script type=text/javascript src=https://iceberg.apache.org/docs/1.3.0//js/search.js></script>
 <script src=https://iceberg.apache.org/docs/1.3.0//js/bootstrap.min.js></script>
diff --git a/docs/1.3.0/spark-queries/index.html b/docs/1.3.0/spark-queries/index.html
index 477bec79..9b60227b 100644
--- a/docs/1.3.0/spark-queries/index.html
+++ b/docs/1.3.0/spark-queries/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a id=active href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-p [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a id=active href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/spark-structured-streaming/index.html b/docs/1.3.0/spark-structured-streaming/index.html
index a026362b..5b704486 100644
--- a/docs/1.3.0/spark-structured-streaming/index.html
+++ b/docs/1.3.0/spark-structured-streaming/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a id=active href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-p [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a id=active href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
diff --git a/docs/1.3.0/spark-writes/index.html b/docs/1.3.0/spark-writes/index.html
index d9b89ee2..d18bd7d4 100644
--- a/docs/1.3.0/spark-writes/index.html
+++ b/docs/1.3.0/spark-writes/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a id=active href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-p [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class="collapse in"><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a id=active href=../spark-writes/>Writes</a></li></ul></div><li><a cl [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>
@@ -104,44 +104,48 @@ Using <code>format("iceberg")</code> loads an isolated table reference that will
 </span></span></code></pre></div><p>The Iceberg table location can also be specified by the <code>location</code> table property:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-scala data-lang=scala><span style=display:flex><span>data<span style=color:#f92672>.</span>writeTo<span style=color:#f92672>(</span><span style=color:#e6db74>&#34;prod.db.table&#34;</span><span style=color:#f92672>)</span>
 </span></span><span style=display:flex><span>    <span style=color:#f92672>.</span>tableProperty<span style=color:#f92672>(</span><span style=color:#e6db74>&#34;location&#34;</span><span style=color:#f92672>,</span> <span style=color:#e6db74>&#34;/path/to/location&#34;</span><span style=color:#f92672>)</span>
 </span></span><span style=display:flex><span>    <span style=color:#f92672>.</span>createOrReplace<span style=color:#f92672>()</span>
-</span></span></code></pre></div><h2 id=writing-to-partitioned-tables>Writing to partitioned tables</h2><p>Iceberg requires the data to be sorted according to the partition spec per task (Spark partition) in prior to write
-against partitioned table. This applies both Writing with SQL and Writing with DataFrames.</p><div class=info>Explicit sort is necessary because Spark doesn&rsquo;t allow Iceberg to request a sort before writing as of Spark 3.0.
-<a href=https://issues.apache.org/jira/browse/SPARK-23889>SPARK-23889</a> is filed to enable Iceberg to require specific
-distribution & sort order to Spark.</div><div class=info>Both global sort (<code>orderBy</code>/<code>sort</code>) and local sort (<code>sortWithinPartitions</code>) work for the requirement.</div><p>Let&rsquo;s go through writing the data against below sample table:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CR [...]
+</span></span></code></pre></div><h2 id=writing-distribution-modes>Writing Distribution Modes</h2><p>Iceberg&rsquo;s default Spark writers require that the data in each Spark task is clustered by partition values. This
+distribution is required to minimize the number of file handles that are held open while writing. By default, starting
+in Iceberg 1.2.0, Iceberg also requests that Spark pre-sort data to be written to fit this distribution. The
+request to Spark is done through the table property <code>write.distribution-mode</code> with the value <code>hash</code>.</p><p>Let&rsquo;s go through writing the data to the sample table below:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> prod.db.sample (
 </span></span><span style=display:flex><span>    id bigint,
 </span></span><span style=display:flex><span>    <span style=color:#66d9ef>data</span> string,
 </span></span><span style=display:flex><span>    category string,
 </span></span><span style=display:flex><span>    ts <span style=color:#66d9ef>timestamp</span>)
 </span></span><span style=display:flex><span><span style=color:#66d9ef>USING</span> iceberg
 </span></span><span style=display:flex><span>PARTITIONED <span style=color:#66d9ef>BY</span> (days(ts), category)
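+-- A sketch: the distribution mode can also be set explicitly as a table
+-- property instead of relying on the hash default:
+ALTER TABLE prod.db.sample SET TBLPROPERTIES ('write.distribution-mode' = 'hash')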
-</span></span></code></pre></div><p>To write data to the sample table, your data needs to be sorted by <code>days(ts), category</code>.</p><p>If you&rsquo;re inserting data with SQL statement, you can use <code>ORDER BY</code> to achieve it, like below:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>INSERT</span> <s [...]
+</span></span></code></pre></div><p>To write data to the sample table, data needs to be sorted by <code>days(ts), category</code>, but this is taken care
+of automatically by the default <code>hash</code> distribution. Previously this would have required manual sorting, but this
+is no longer the case.</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>INSERT</span> <span style=color:#66d9ef>INTO</span> prod.db.sample
 </span></span><span style=display:flex><span><span style=color:#66d9ef>SELECT</span> id, <span style=color:#66d9ef>data</span>, category, ts <span style=color:#66d9ef>FROM</span> another_table
-</span></span><span style=display:flex><span><span style=color:#66d9ef>ORDER</span> <span style=color:#66d9ef>BY</span> ts, category
-</span></span></code></pre></div><p>If you&rsquo;re inserting data with DataFrame, you can use either <code>orderBy</code>/<code>sort</code> to trigger global sort, or <code>sortWithinPartitions</code>
-to trigger local sort. Local sort for example:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-scala data-lang=scala><span style=display:flex><span>data<span style=color:#f92672>.</span>sortWithinPartitions<span style=color:#f92672>(</span><span style=color:#e6db74>&#34;ts&#34;</span><span style=color:#f92672>,</span> <span style=color:#e6db74>&#34;category&#34;</span><span style=color:#f92 [...]
-</span></span><span style=display:flex><span>    <span style=color:#f92672>.</span>writeTo<span style=color:#f92672>(</span><span style=color:#e6db74>&#34;prod.db.sample&#34;</span><span style=color:#f92672>)</span>
-</span></span><span style=display:flex><span>    <span style=color:#f92672>.</span>append<span style=color:#f92672>()</span>
-</span></span></code></pre></div><p>You can simply add the original column to the sort condition for the most partition transformations, except <code>bucket</code>.</p><p>For <code>bucket</code> partition transformation, you need to register the Iceberg transform function in Spark to specify it during sort.</p><p>Let&rsquo;s go through another sample table having bucket partition:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab- [...]
-</span></span><span style=display:flex><span>    id bigint,
-</span></span><span style=display:flex><span>    <span style=color:#66d9ef>data</span> string,
-</span></span><span style=display:flex><span>    category string,
-</span></span><span style=display:flex><span>    ts <span style=color:#66d9ef>timestamp</span>)
-</span></span><span style=display:flex><span><span style=color:#66d9ef>USING</span> iceberg
-</span></span><span style=display:flex><span>PARTITIONED <span style=color:#66d9ef>BY</span> (bucket(<span style=color:#ae81ff>16</span>, id))
-</span></span></code></pre></div><p>You need to register the function to deal with bucket, like below:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-scala data-lang=scala><span style=display:flex><span><span style=color:#66d9ef>import</span> org.apache.iceberg.spark.IcebergSpark
-</span></span><span style=display:flex><span><span style=color:#66d9ef>import</span> org.apache.spark.sql.types.DataTypes
-</span></span><span style=display:flex><span>
-</span></span><span style=display:flex><span><span style=color:#a6e22e>IcebergSpark</span><span style=color:#f92672>.</span>registerBucketUDF<span style=color:#f92672>(</span>spark<span style=color:#f92672>,</span> <span style=color:#e6db74>&#34;iceberg_bucket16&#34;</span><span style=color:#f92672>,</span> <span style=color:#a6e22e>DataTypes</span><span style=color:#f92672>.</span><span style=color:#a6e22e>LongType</span><span style=color:#f92672>,</span> <span style=color:#ae81ff>16</s [...]
-</span></span></code></pre></div><div class=info>Explicit registration of the function is necessary because Spark doesn&rsquo;t allow Iceberg to provide functions.
-<a href=https://issues.apache.org/jira/browse/SPARK-27658>SPARK-27658</a> is filed to enable Iceberg to provide functions
-which can be used in query.</div><p>Here we just registered the bucket function as <code>iceberg_bucket16</code>, which can be used in sort clause.</p><p>If you&rsquo;re inserting data with SQL statement, you can use the function like below:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql><span style=display:flex><span><span style=color:#66d9ef>INSERT</span> <span style=co [...]
-</span></span><span style=display:flex><span><span style=color:#66d9ef>SELECT</span> id, <span style=color:#66d9ef>data</span>, category, ts <span style=color:#66d9ef>FROM</span> another_table
-</span></span><span style=display:flex><span><span style=color:#66d9ef>ORDER</span> <span style=color:#66d9ef>BY</span> iceberg_bucket16(id)
-</span></span></code></pre></div><p>If you&rsquo;re inserting data with DataFrame, you can use the function like below:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-scala data-lang=scala><span style=display:flex><span>data<span style=color:#f92672>.</span>sortWithinPartitions<span style=color:#f92672>(</span>expr<span style=color:#f92672>(</span><span style=color:#e6db74>&#34;iceberg_buc [...]
-</span></span><span style=display:flex><span>    <span style=color:#f92672>.</span>writeTo<span style=color:#f92672>(</span><span style=color:#e6db74>&#34;prod.db.sample&#34;</span><span style=color:#f92672>)</span>
-</span></span><span style=display:flex><span>    <span style=color:#f92672>.</span>append<span style=color:#f92672>()</span>
-</span></span></code></pre></div><h2 id=type-compatibility>Type compatibility</h2><p>Spark and Iceberg support different set of types. Iceberg does the type conversion automatically, but not for all combinations,
-so you may want to understand the type conversion in Iceberg in prior to design the types of columns in your tables.</p><h3 id=spark-type-to-iceberg-type>Spark type to Iceberg type</h3><p>This type conversion table describes how Spark types are converted to the Iceberg types. The conversion applies on both creating Iceberg table and writing to Iceberg table via Spark.</p><table><thead><tr><th>Spark</th><th>Iceberg</th><th>Notes</th></tr></thead><tbody><tr><td>boolean</td><td>boolean</td> [...]
+</span></span></code></pre></div><p>There are 3 options for <code>write.distribution-mode</code>:</p><ul><li><code>none</code> - This is the previous default for Iceberg.<br>This mode does not request any shuffle or sort to be performed automatically by Spark. Because no work is done
+automatically by Spark, the data must be <em>manually</em> sorted by partition value. The data must be sorted either within
+each Spark task, or globally within the entire dataset. A global sort will minimize the number of output files.<br>A sort can be avoided by using the Spark <a href=#write-properties>write fanout</a> property, but this will cause all
+file handles to remain open until each write task has completed.</li><li><code>hash</code> - This mode is the new default and requests that Spark use a hash-based exchange to shuffle the incoming
+write data before writing.<br>Practically, this means that each row is hashed based on the row&rsquo;s partition value and then placed
+in a corresponding Spark task based upon that value. Further division and coalescing of tasks may take place because of
+<a href=#controlling-file-sizes>Spark&rsquo;s Adaptive Query planning</a>.</li><li><code>range</code> - This mode requests that Spark perform a range-based exchange to shuffle the data before writing.<br>This is a two-stage procedure which is more expensive than the <code>hash</code> mode. The first stage samples the data to
+be written based on the partition and sort columns. The second stage uses the range information to shuffle the input data into Spark
+tasks. Each task gets an exclusive range of the input data, which clusters the data by partition and also sorts it globally.<br>While this is more expensive than the hash distribution, the global ordering can be beneficial for read performance if
+sorted columns are used during queries. This mode is used by default if a table is created with a
+sort-order. Further division and coalescing of tasks may take place because of
+<a href=#controlling-file-sizes>Spark&rsquo;s Adaptive Query planning</a>.</li></ul><h2 id=controlling-file-sizes>Controlling File Sizes</h2><p>When writing data to Iceberg with Spark, it&rsquo;s important to note that Spark cannot write a file larger than a Spark
+task and a file cannot span an Iceberg partition boundary. This means that although Iceberg will always roll over a file
+when it grows to <a href=../configuration/#write-properties><code>write.target-file-size-bytes</code></a>, this will not happen
+unless the Spark task is large enough. The size of the file created on disk will also be much smaller than the Spark task
+since the on-disk data will be both compressed and in columnar format as opposed to Spark&rsquo;s uncompressed row
+representation. This means a 100 megabyte Spark task will create a file much smaller than 100 megabytes even if that
+task is writing to a single Iceberg partition. If the task writes to multiple partitions, the files will be even
+smaller than that.</p><p>To control what data ends up in each Spark task, use a <a href=#writing-distribution-modes><code>write distribution mode</code></a>
+or manually repartition the data.</p><p>To adjust Spark&rsquo;s task size, it is important to become familiar with Spark&rsquo;s various Adaptive Query Execution (AQE)
+parameters. When the <code>write.distribution-mode</code> is not <code>none</code>, AQE will control the coalescing and splitting of Spark
+tasks during the exchange to try to create tasks of <code>spark.sql.adaptive.advisoryPartitionSizeInBytes</code> size. These
+settings will also affect any user-performed repartitions or sorts.
+Note again that this is the in-memory Spark row size and not the on-disk
+columnar-compressed size, so a larger value than the target file size will need to be specified. The ratio of
+in-memory size to on-disk size is data dependent. Future work in Spark should allow Iceberg to automatically adjust this
+parameter at write time to match the <code>write.target-file-size-bytes</code>.</p><h2 id=type-compatibility>Type compatibility</h2><p>Spark and Iceberg support different sets of types. Iceberg does the type conversion automatically, but not for all combinations,
+so you may want to understand the type conversion in Iceberg prior to designing the types of columns in your tables.</p><h3 id=spark-type-to-iceberg-type>Spark type to Iceberg type</h3><p>This type conversion table describes how Spark types are converted to the Iceberg types. The conversion applies both when creating an Iceberg table and when writing to an Iceberg table via Spark.</p><table><thead><tr><th>Spark</th><th>Iceberg</th><th>Notes</th></tr></thead><tbody><tr><td>boolean</td><td>boolean</td> [...]
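+<p>As a sketch of the file-size tuning described above (the values are illustrative, not recommendations): the target
+file size is a table property, while the advisory partition size is a Spark session setting that can be set in SQL:</p><div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-sql data-lang=sql>-- aim for ~512 MB data files on disk
+ALTER TABLE prod.db.sample SET TBLPROPERTIES ('write.target-file-size-bytes' = '536870912')
+-- give AQE a larger in-memory advisory size; the right ratio of in-memory to
+-- on-disk size is data dependent
+SET spark.sql.adaptive.advisoryPartitionSizeInBytes=1g
+</code></pre></div>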
 <script src=https://iceberg.apache.org/docs/1.3.0//js/jquery.easing.min.js></script>
 <script type=text/javascript src=https://iceberg.apache.org/docs/1.3.0//js/search.js></script>
 <script src=https://iceberg.apache.org/docs/1.3.0//js/bootstrap.min.js></script>
diff --git a/docs/1.3.0/table-migration/index.html b/docs/1.3.0/table-migration/index.html
index 8244ea3d..fb8bf22a 100644
--- a/docs/1.3.0/table-migration/index.html
+++ b/docs/1.3.0/table-migration/index.html
@@ -7,7 +7,7 @@
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Tables class=collapse><ul class=sub-menu><li><a href=../branching/>Branching and Tagging</a></li><li><a href=../configuration/>Configuration</a></li><li><a href=../evolution/>Evolution</a></li><li><a href=../maintenance/>Maintenance</a></li><li><a href=../partitioning/>Partitioning</a></li><li><a href=../performance/>Performance</a></li><li><a href=../reliability/>Reliability</a></li><li><a href=../schemas/>Schemas</a></li></ul></div><li [...]
 <i class="fa fa-chevron-right"></i>
-<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-toggle collapsed" data-toggle=collapse data-parent=full href [...]
+<i class="fa fa-chevron-down"></i></a></li><div id=Spark class=collapse><ul class=sub-menu><li><a href=../getting-started/>Getting Started</a></li><li><a href=../spark-configuration/>Configuration</a></li><li><a href=../spark-ddl/>DDL</a></li><li><a href=../spark-procedures/>Procedures</a></li><li><a href=../spark-queries/>Queries</a></li><li><a href=../spark-structured-streaming/>Structured Streaming</a></li><li><a href=../spark-writes/>Writes</a></li></ul></div><li><a class="chevron-to [...]
 <i class="fa fa-chevron-right"></i>
 <i class="fa fa-chevron-down"></i></a></li><div id=Flink class=collapse><ul class=sub-menu><li><a href=../flink/>Flink Getting Started</a></li><li><a href=../flink-connector/>Flink Connector</a></li><li><a href=../flink-ddl/>Flink DDL</a></li><li><a href=../flink-queries/>Flink Queries</a></li><li><a href=../flink-writes/>Flink Writes</a></li><li><a href=../flink-actions/>Flink Actions</a></li><li><a href=../flink-configuration/>Flink Configuration</a></li></ul></div><li><a href=../hive/ [...]
 <i class="fa fa-chevron-right"></i>