Posted to commits@iceberg.apache.org by gi...@apache.org on 2022/02/04 21:03:30 UTC

[iceberg-docs] branch asf-site updated: deploy: 1fb0a2f871841ea2fbaf0da5d17315cbfd1910b8

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/iceberg-docs.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new e5dfd46  deploy: 1fb0a2f871841ea2fbaf0da5d17315cbfd1910b8
e5dfd46 is described below

commit e5dfd46c6a24f3186f7e64b661e8282ffcf39f9b
Author: jackye1995 <ja...@users.noreply.github.com>
AuthorDate: Fri Feb 4 21:03:22 2022 +0000

    deploy: 1fb0a2f871841ea2fbaf0da5d17315cbfd1910b8
---
 blogs/index.html    |   6 +++
 index.html          |  26 +++++-----
 index.xml           |  28 ++---------
 releases/index.html |  27 +++++++++-
 sitemap.xml         |   2 +-
 spec/index.html     | 139 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 185 insertions(+), 43 deletions(-)

diff --git a/blogs/index.html b/blogs/index.html
index 8cda1d0..a2e5f8d 100644
--- a/blogs/index.html
+++ b/blogs/index.html
@@ -73,6 +73,12 @@
 <div class=markdown-body>
 <h2 id=iceberg-blogs>Iceberg Blogs</h2>
 <p>Here is a list of company blogs that talk about Iceberg. The blogs are ordered from most recent to oldest.</p>
+<h3 id=using-flink-cdc-to-synchronize-data-from-mysql-sharding-tables-and-build-real-time-data-lakehttpsververicagithubioflink-cdc-connectorsmastercontentquickstartbuild-real-time-data-lake-tutorialhtml><a href=https://ververica.github.io/flink-cdc-connectors/master/content/quickstart/build-real-time-data-lake-tutorial.html>Using Flink CDC to synchronize data from MySQL sharding tables and build real-time data lake</a></h3>
+<p><strong>Date</strong>: 11 November 2021, <strong>Company</strong>: Ververica, Alibaba Cloud
+<strong>Author</strong>: <a href=https://github.com/luoyuxia>Yuxia Luo</a>, <a href=https://github.com/wuchong>Jark Wu</a>, <a href=https://www.linkedin.com/in/zheng-hu-37017683/>Zheng Hu</a></p>
+<h3 id=metadata-indexing-in-iceberghttpstabularioblogiceberg-metadata-indexing><a href=https://tabular.io/blog/iceberg-metadata-indexing/>Metadata Indexing in Iceberg</a></h3>
+<p><strong>Date</strong>: 10 October 2021, <strong>Company</strong>: Tabular
+<strong>Author</strong>: <a href=https://www.linkedin.com/in/rdblue/>Ryan Blue</a></p>
 <h3 id=using-debezium-to-create-a-data-lake-with-apache-iceberghttpsdebeziumioblog20211020using-debezium-create-data-lake-with-apache-iceberg><a href=https://debezium.io/blog/2021/10/20/using-debezium-create-data-lake-with-apache-iceberg/>Using Debezium to Create a Data Lake with Apache Iceberg</a></h3>
 <p><strong>Date</strong>: October 20th, 2021, <strong>Company</strong>: Memiiso Community
 <strong>Author</strong>: <a href=https://www.linkedin.com/in/ismailsimsek/>Ismail Simsek</a></p>
diff --git a/index.html b/index.html
index c6ae7d4..0a74dec 100644
--- a/index.html
+++ b/index.html
@@ -122,10 +122,10 @@ Iceberg is a high-performance format for huge analytic tables. Iceberg brings th
 </div>
 </section>
 <section id=services>
-<div class=content-section-a>
+<div class=content-section-b>
 <div class=container>
 <div class=row>
-<div class="col-lg-5 col-sm-6">
+<div class="col-lg-5 col-lg-offset-1 col-sm-push-6 col-sm-6">
 <hr class=section-heading-spacer>
 <div class=clearfix></div>
 <h2 class=section-heading>Expressive SQL</h2>
@@ -138,7 +138,7 @@ Iceberg supports flexible SQL commands to merge new data, update existing rows,
 </li>
 </ul>
 </div>
-<div class="col-lg-5 col-lg col-sm-6">
+<div class="col-lg-5 col-sm-pull-6 col-sm-6">
 <div id=termynal-expressive-sql data-termynal data-ty-startdelay=2000 data-ty-typedelay=20 data-ty-linedelay=500>
 <span data-ty=input data-ty-cursor=▋ data-ty-prompt="sql>">MERGE INTO prod.nyc.taxis pt</span>
 <span data-ty=input data-ty-cursor=▋ data-ty-prompt>USING (SELECT * FROM staging.nyc.taxis) st</span>
@@ -186,10 +186,10 @@ Schema evolution just works. Adding a column won't bring back "zombie" data. Col
 </div>
 </div>
 </div>
-<div class=content-section-a>
+<div class=content-section-b>
 <div class=container>
 <div class=row>
-<div class="col-lg-5 col-sm-6">
+<div class="col-lg-5 col-lg-offset-1 col-sm-push-6 col-sm-6">
 <hr class=section-heading-spacer>
 <div class=clearfix></div>
 <h2 class=section-heading>Hidden Partitioning</h2>
@@ -202,19 +202,19 @@ Iceberg handles the tedious and error-prone task of producing partition values f
 </li>
 </ul>
 </div>
-<div class="col-lg-5 col-lg col-sm-6">
+<div class="col-lg-5 col-sm-pull-6 col-sm-6">
 </div>
-<div class="col-lg-5 col-lg-offset-2 col-sm-6">
+<div class="col-lg-5 col-sm-pull-6 col-sm-6">
 <script src=https://unpkg.com/@lottiefiles/lottie-player@latest/dist/lottie-player.js></script>
 <lottie-player src=https://iceberg.apache.org/lottie/hidden-partitioning-animation.json background=transparent speed=0.5 style="width: 600px; height: 400px;" loop autoplay></lottie-player>
 </div>
 </div>
 </div>
 </div>
-<div class=content-section-b>
+<div class=content-section-a>
 <div class=container>
 <div class=row>
-<div class="col-lg-5 col-lg-offset-1 col-sm-push-6 col-sm-6">
+<div class="col-lg-5 col-sm-6">
 <hr class=section-heading-spacer>
 <div class=clearfix></div>
 <h2 class=section-heading>Time Travel and Rollback</h2>
@@ -227,7 +227,7 @@ Time-travel enables reproducible queries that use exactly the same table snapsho
 </li>
 </ul>
 </div>
-<div class="col-lg-5 col-sm-pull-6 col-sm-6">
+<div class="col-lg-5 col-lg col-sm-6">
 <div class=termynal-container>
 <div id=termynal-time-travel data-termynal data-ty-startdelay=6000 data-ty-typedelay=20 data-ty-linedelay=500>
 <span data-ty=input data-ty-cursor=▋ data-ty-prompt="scala>">spark.read.table("taxis").count()</span>
@@ -247,16 +247,16 @@ Time-travel enables reproducible queries that use exactly the same table snapsho
 </div>
 </div>
 </div>
-<div class=content-section-a>
+<div class=content-section-b>
 <div class=container>
 <div class=row>
-<div class="col-lg-5 col-sm-6">
+<div class="col-lg-5 col-lg-offset-1 col-sm-push-6 col-sm-6">
 <hr class=section-heading-spacer>
 <div class=clearfix></div>
 <h2 class=section-heading>Data Compaction</h2>
 Data compaction is supported out-of-the-box and you can choose from different rewrite strategies such as bin-packing or sorting to optimize file layout and size.
 </div>
-<div class="col-lg-5 col-lg col-sm-6">
+<div class="col-lg-5 col-sm-pull-6 col-sm-6">
 <div id=termynal-data-compaction data-termynal data-ty-startdelay=8000 data-ty-typedelay=20 data-ty-linedelay=500>
 <span data-ty=input data-ty-cursor=▋ data-ty-prompt="sql>">CALL system.rewrite_data_files("nyc.taxis");</span>
 </div>
diff --git a/index.xml b/index.xml
index 42d9dcc..413189c 100644
--- a/index.xml
+++ b/index.xml
@@ -2,44 +2,22 @@
 Community discussions happen primarily on the dev mailing list, on apache-iceberg Slack workspace, and on specific GitHub issues.
 Contributing The Iceberg Project is hosted on Github at https://github.com/apache/iceberg.
 The Iceberg community prefers to receive contributions as Github pull requests.
-View open pull requests Learn about pull requests Issues Issues are tracked in GitHub:</description></item><item><title/><link>https://iceberg.apache.org/community/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/community/</guid><description>Welcome! Apache Iceberg tracks issues in GitHub and prefers to receive contributions as pull requests.
-Community discussions happen primarily on the dev mailing list, on apache-iceberg Slack workspace, and on specific GitHub issues.
-Contributing The Iceberg Project is hosted on Github at https://github.com/apache/iceberg.
-The Iceberg community prefers to receive contributions as Github pull requests.
 View open pull requests Learn about pull requests Issues Issues are tracked in GitHub:</description></item><item><title>Expressive SQL</title><link>https://iceberg.apache.org/services/expressive-sql/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/services/expressive-sql/</guid><description>"MERGE INTO prod.nyc.taxis pt USING (SELECT * FROM staging.nyc.taxis) st ON pt.id = st.id WHEN NOT MATCHED THEN INSERT *; Done! "</description></item><item><ti [...]
 Using Flink CDC to synchronize data from MySQL sharding tables and build real-time data lake Date: 11 November 2021, Company: Ververica, Alibaba Cloud Author: Yuxia Luo, Jark Wu, Zheng Hu
 Metadata Indexing in Iceberg Date: 10 October 2021, Company: Tabular Author: Ryan Blue
-Using Debezium to Create a Data Lake with Apache Iceberg Date: October 20th, 2021, Company: Memiiso Community Author: Ismail Simsek</description></item><item><title/><link>https://iceberg.apache.org/blogs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/blogs/</guid><description>Iceberg Blogs Here is a list of company blogs that talk about Iceberg. The blogs are ordered from most recent to oldest.
-Using Debezium to Create a Data Lake with Apache Iceberg Date: October 20th, 2021, Company: Memiiso Community Author: Ismail Simsek
-How to Analyze CDC Data in Iceberg Data Lake Using Flink Date: June 15, 2021, Company: Alibaba Cloud Community
-Author: Li Jinsong, Hu Zheng, Yang Weihai, Peidan Li</description></item><item><title>Full Schema Evolution</title><link>https://iceberg.apache.org/services/schema-evolution/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/services/schema-evolution/</guid><description>"ALTER TABLE taxis ALTER COLUMN trip_distance TYPE double; Done! "ALTER TABLE taxis ALTER COLUMN trip_distance AFTER fare; Done! "ALTER TABLE taxis RENAME COLUMN trip_distance TO dis [...]
-Expert Roundtable: The Future of Metadata After Hive Metastore Date: November 15, 2021, Authors: Lior Ebel, Seshu Adunuthula, Ryan Blue &amp;amp; Oz Katz
-Spark and Iceberg at Apple&amp;rsquo;s Scale - Leveraging differential files for efficient upserts and deletes Date: October 21, 2020, Author: Anton
-Apache Iceberg - A Table Format for Huge Analytic Datasets Date: October 21, 2020, Author: Ryan Blue</description></item><item><title/><link>https://iceberg.apache.org/talks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/talks/</guid><description>Iceberg Talks Here is a list of talks and other videos related to Iceberg.
+Using Debezium to Create a Data Lake with Apache Iceberg Date: October 20th, 2021, Company: Memiiso Community Author: Ismail Simsek</description></item><item><title>Full Schema Evolution</title><link>https://iceberg.apache.org/services/schema-evolution/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/services/schema-evolution/</guid><description>"ALTER TABLE taxis ALTER COLUMN trip_distance TYPE double; Done! "ALTER TABLE taxis ALTER COLUMN trip_d [...]
 Expert Roundtable: The Future of Metadata After Hive Metastore Date: November 15, 2021, Authors: Lior Ebel, Seshu Adunuthula, Ryan Blue &amp;amp; Oz Katz
 Spark and Iceberg at Apple&amp;rsquo;s Scale - Leveraging differential files for efficient upserts and deletes Date: October 21, 2020, Author: Anton
 Apache Iceberg - A Table Format for Huge Analytic Datasets Date: October 21, 2020, Author: Ryan Blue</description></item><item><title>Hidden Partitioning</title><link>https://iceberg.apache.org/services/hidden-partitioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/services/hidden-partitioning/</guid><description/></item><item><title>Time Travel and Rollback</title><link>https://iceberg.apache.org/services/time-travel/</link><pubDate>Mon, 01 [...]
 0.12.1 source tar.gz &amp;ndash; signature &amp;ndash; sha512 0.12.1 Spark 3.0 runtime Jar 0.12.1 Spark 2.4 runtime Jar 0.12.1 Flink runtime Jar 0.12.1 Hive runtime Jar To use Iceberg in Spark, download the runtime JAR and add it to the jars folder of your Spark install. Use iceberg-spark3-runtime for Spark 3, and iceberg-spark-runtime for Spark 2.4.
-To use Iceberg in Hive, download the iceberg-hive-runtime JAR and add it to Hive using ADD JAR.</description></item><item><title/><link>https://iceberg.apache.org/releases/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/releases/</guid><description>Downloads The latest version of Iceberg is 0.12.1.
-0.12.1 source tar.gz &amp;ndash; signature &amp;ndash; sha512 0.12.1 Spark 3.0 runtime Jar 0.12.1 Spark 2.4 runtime Jar 0.12.1 Flink runtime Jar 0.12.1 Hive runtime Jar To use Iceberg in Spark, download the runtime JAR and add it to the jars folder of your Spark install. Use iceberg-spark3-runtime for Spark 3, and iceberg-spark-runtime for Spark 2.4.
 To use Iceberg in Hive, download the iceberg-hive-runtime JAR and add it to Hive using ADD JAR.</description></item><item><title/><link>https://iceberg.apache.org/spec/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/spec/</guid><description>Iceberg Table Spec This is a specification for the Iceberg table format that is designed to manage a large, slow-changing collection of files in a distributed file system or key-value store as a table.
 Format Versioning Versions 1 and 2 of the Iceberg spec are complete and adopted by the community.
 The format version number is incremented when new features are added that will break forward-compatibility&amp;mdash;that is, when older readers would not read newer table features correctly.</description></item><item><title/><link>https://iceberg.apache.org/terms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/terms/</guid><description>Terms Snapshot A snapshot is the state of a table at some time.
 Each snapshot lists all of the data files that make up the table&amp;rsquo;s contents at the time of the snapshot. Data files are stored across multiple manifest files, and the manifests for a snapshot are listed in a single manifest list file.
-Manifest list A manifest list is a metadata file that lists the manifests that make up a table snapshot.</description></item><item><title/><link>https://iceberg.apache.org/how-to-verify-a-release/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/how-to-verify-a-release/</guid><description>How to Verify a Release Each Apache Iceberg release is validated by the community by holding a vote. A community release manager will prepare a release candidate  [...]
-Format Versioning Versions 1 and 2 of the Iceberg spec are complete and adopted by the community.
-The format version number is incremented when new features are added that will break forward-compatibility&amp;mdash;that is, when older readers would not read newer table features correctly.</description></item><item><title/><link>https://iceberg.apache.org/terms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/terms/</guid><description>Terms Snapshot A snapshot is the state of a table at some time.
-Each snapshot lists all of the data files that make up the table&amp;rsquo;s contents at the time of the snapshot. Data files are stored across multiple manifest files, and the manifests for a snapshot are listed in a single manifest list file.
-Manifest list A manifest list is a metadata file that lists the manifests that make up a table snapshot.</description></item><item><title>Benchmarks</title><link>https://iceberg.apache.org/benchmarks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/benchmarks/</guid><description>Available Benchmarks and how to run them Benchmarks are located under &amp;lt;project-name&amp;gt;/jmh. It is generally favorable to only run the tests of interest rather  [...]
-Running Benchmarks on GitHub It is possible to run one or more Benchmarks via the JMH Benchmarks GH action on your own fork of the Iceberg repo.</description></item><item><title>Benchmarks</title><link>https://iceberg.apache.org/benchmarks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/benchmarks/</guid><description>Available Benchmarks and how to run them Benchmarks are located under &amp;lt;project-name&amp;gt;/jmh. It is generally favorable t [...]
+Manifest list A manifest list is a metadata file that lists the manifests that make up a table snapshot.</description></item><item><title/><link>https://iceberg.apache.org/how-to-verify-a-release/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/how-to-verify-a-release/</guid><description>How to Verify a Release Each Apache Iceberg release is validated by the community by holding a vote. A community release manager will prepare a release candidate  [...]
 Running Benchmarks on GitHub It is possible to run one or more Benchmarks via the JMH Benchmarks GH action on your own fork of the Iceberg repo.</description></item><item><title>How To Release</title><link>https://iceberg.apache.org/how-to-release/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/how-to-release/</guid><description>Setup To create a release candidate, you will need:
 Apache LDAP credentials for Nexus and SVN A GPG key for signing, published in KEYS Nexus access Nexus credentials are configured in your personal ~/.gradle/gradle.properties file using mavenUser and mavenPassword:
-mavenUser=yourApacheID mavenPassword=SomePassword PGP signing The release scripts use the command-line gpg utility so that signing can use the gpg-agent and does not require writing your private key&amp;rsquo;s passphrase to a configuration file.</description></item><item><title>How To Release</title><link>https://iceberg.apache.org/how-to-release/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/how-to-release/</guid><description>Setup To create a [...]
-Apache LDAP credentals for Nexus and SVN A GPG key for signing, published in KEYS Nexus access Nexus credentials are configured in your personal ~/.gradle/gradle.properties file using mavenUser and mavenPassword:
 mavenUser=yourApacheID mavenPassword=SomePassword PGP signing The release scripts use the command-line gpg utility so that signing can use the gpg-agent and does not require writing your private key&amp;rsquo;s passphrase to a configuration file.</description></item><item><title>Roadmap</title><link>https://iceberg.apache.org/roadmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/roadmap/</guid><description>Roadmap Overview This roadmap outlines [...]
-Priority 1 API: Iceberg 1.0.0 [medium] Spark: Merge-on-read plans [large] Maintenance: Delete file compaction [medium] Flink: Upgrade to 1.</description></item><item><title>Roadmap</title><link>https://iceberg.apache.org/roadmap/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/roadmap/</guid><description>Roadmap Overview This roadmap outlines projects that the Iceberg community is working on, their priority, and a rough size estimate. This is base [...]
 Priority 1 API: Iceberg 1.0.0 [medium] Spark: Merge-on-read plans [large] Maintenance: Delete file compaction [medium] Flink: Upgrade to 1.</description></item><item><title>Security</title><link>https://iceberg.apache.org/security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/security/</guid><description>Reporting Security Issues The Apache Iceberg Project uses the standard process outlined by the Apache Security Team for reporting vulnerabilit [...]
 To report a possible security vulnerability, please email security@iceberg.apache.org.
-Verifying Signed Releases Please refer to the instructions on the Release Verification page.</description></item><item><title>Security</title><link>https://iceberg.apache.org/security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/security/</guid><description>Reporting Security Issues The Apache Iceberg Project uses the standard process outlined by the Apache Security Team for reporting vulnerabilities. Note that vulnerabilities should not be pu [...]
-To report a possible security vulnerability, please email security@iceberg.apache.org.
-Verifying Signed Releases Please refer to the instructions on the Release Verification page.</description></item><item><title>Trademarks</title><link>https://iceberg.apache.org/trademarks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/trademarks/</guid><description>Trademarks Apache Iceberg, Iceberg, Apache, the Apache feather logo, and the Apache Iceberg project logo are either registered trademarks or trademarks of The Apache Software Foundati [...]
\ No newline at end of file
+Verifying Signed Releases Please refer to the instructions on the Release Verification page.</description></item><item><title>Trademarks</title><link>https://iceberg.apache.org/trademarks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/trademarks/</guid><description>Trademarks Apache Iceberg, Iceberg, Apache, the Apache feather logo, and the Apache Iceberg project logo are either registered trademarks or trademarks of The Apache Software Foundati [...]
\ No newline at end of file
diff --git a/releases/index.html b/releases/index.html
index 2c2630c..35e8b9f 100644
--- a/releases/index.html
+++ b/releases/index.html
@@ -99,8 +99,32 @@
   &lt;/dependency&gt;
   ...
 &lt;/dependencies&gt;
-</code></pre><h2 id=0120-release-notes>0.12.0 Release Notes</h2>
+</code></pre><h2 id=0121-release-notes>0.12.1 Release Notes</h2>
+<p>Apache Iceberg 0.12.1 was released on November 8th, 2021.</p>
+<p>Important bug fixes and changes:</p>
+<ul>
+<li><a href=https://github.com/apache/iceberg/pull/3258>#3258</a> fixes validation failures that occurred after snapshot expiration when writing Flink CDC streams to Iceberg tables.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3264>#3264</a> fixes reading projected map columns from Parquet files written before Parquet 1.11.1.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3195>#3195</a> allows validating that commits that produce row-level deltas don&rsquo;t conflict with concurrently added files. Ensures users can maintain serializable isolation for update and delete operations, including merge operations.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3199>#3199</a> allows validating that commits that overwrite files don&rsquo;t conflict with concurrently added files. Ensures users can maintain serializable isolation for overwrite operations.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3135>#3135</a> fixes equality-deletes using <code>DATE</code>, <code>TIMESTAMP</code>, and <code>TIME</code> types.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3078>#3078</a> prevents the JDBC catalog from overwriting the <code>jdbc.user</code> property if any property called <code>user</code> exists in the environment.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3035>#3035</a> fixes drop namespace calls with the DynamoDB catalog.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3273>#3273</a> fixes importing Avro files via <code>add_files</code> by correctly setting the number of records.</li>
+<li><a href=https://github.com/apache/iceberg/pull/3332>#3332</a> fixes importing ORC files with float or double columns in <code>add_files</code>.</li>
+</ul>
+<p>A more exhaustive list of changes is available under the <a href="https://github.com/apache/iceberg/milestone/15?closed=1">0.12.1 release milestone</a>.</p>
+<h2 id=past-releases>Past releases</h2>
+<h3 id=0120>0.12.0</h3>
 <p>Apache Iceberg 0.12.0 was released on August 15, 2021. It consists of 395 commits authored by 74 contributors over a 139 day period.</p>
+<ul>
+<li>Git tag: <a href=https://github.com/apache/iceberg/releases/tag/apache-iceberg-0.12.0>0.12.0</a></li>
+<li><a href=https://www.apache.org/dyn/closer.cgi/iceberg/apache-iceberg-0.12.0/apache-iceberg-0.12.0.tar.gz>0.12.0 source tar.gz</a> &ndash; <a href=https://downloads.apache.org/iceberg/apache-iceberg-0.12.0/apache-iceberg-0.12.0.tar.gz.asc>signature</a> &ndash; <a href=https://downloads.apache.org/iceberg/apache-iceberg-0.12.0/apache-iceberg-0.12.0.tar.gz.sha512>sha512</a></li>
+<li><a href="https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark3-runtime/0.12.0/iceberg-spark3-runtime-0.12.0.jar">0.12.0 Spark 3.x runtime Jar</a></li>
+<li><a href="https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime/0.12.0/iceberg-spark-runtime-0.12.0.jar">0.12.0 Spark 2.4 runtime Jar</a></li>
+<li><a href="https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-flink-runtime/0.12.0/iceberg-flink-runtime-0.12.0.jar">0.12.0 Flink runtime Jar</a></li>
+<li><a href="https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-hive-runtime/0.12.0/iceberg-hive-runtime-0.12.0.jar">0.12.0 Hive runtime Jar</a></li>
+</ul>
 <p><strong>High-level features:</strong></p>
 <ul>
 <li><strong>Core</strong>
@@ -187,7 +211,6 @@
 </ul>
 </li>
 </ul>
-<h2 id=past-releases>Past releases</h2>
 <h3 id=0111>0.11.1</h3>
 <ul>
 <li>Git tag: <a href=https://github.com/apache/iceberg/releases/tag/apache-iceberg-0.11.1>0.11.1</a></li>
diff --git a/sitemap.xml b/sitemap.xml
index 9e52612..98ddd06 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>https://iceberg.apache.org/community/</loc></url><url><loc>https://iceberg.apache.org/community/</loc></url><url><loc>https://iceberg.apache.org/services/expressive-sql/</loc></url><url><loc>https://iceberg.apache.org/blogs/</loc></url><url><loc>https://iceberg.apache.org/blogs/</loc></url><url><loc>https://iceberg.apache. [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>https://iceberg.apache.org/community/</loc></url><url><loc>https://iceberg.apache.org/services/expressive-sql/</loc></url><url><loc>https://iceberg.apache.org/blogs/</loc></url><url><loc>https://iceberg.apache.org/services/schema-evolution/</loc></url><url><loc>https://iceberg.apache.org/talks/</loc></url><url><loc>https:/ [...]
\ No newline at end of file
diff --git a/spec/index.html b/spec/index.html
index 6e9636d..2ef6103 100644
--- a/spec/index.html
+++ b/spec/index.html
@@ -329,6 +329,25 @@
 <h4 id=column-projection>Column Projection</h4>
 <p>Columns in Iceberg data files are selected by field id. The table schema&rsquo;s column names and order may change after a data file is written, and projection must be done using field ids. If a field id is missing from a data file, its value for each row should be <code>null</code>.</p>
 <p>For example, a file may be written with schema <code>1: a int, 2: b string, 3: c double</code> and read using projection schema <code>3: measurement, 2: name, 4: a</code>. This must select file columns <code>c</code> (renamed to <code>measurement</code>), <code>b</code> (now called <code>name</code>), and a column of <code>null</code> values called <code>a</code>; in that order.</p>
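The projection rule in the example above can be sketched as follows — a toy Python model of matching columns by field id and filling missing ids with nulls; the column representation is an assumption for illustration, not an Iceberg API.

```python
# Sketch of column projection by field id: file columns are keyed by id,
# renamed to the projection schema's names, and ids missing from the file
# yield all-null columns. Data values are illustrative.

def project(file_columns, projection):
    """file_columns: {field_id: [values]}; projection: [(field_id, name)]."""
    n_rows = len(next(iter(file_columns.values())))
    return {
        name: file_columns.get(fid, [None] * n_rows)
        for fid, name in projection
    }

# File written with schema 1: a int, 2: b string, 3: c double
file_columns = {1: [7, 8], 2: ["x", "y"], 3: [0.1, 0.2]}
# Read using projection schema 3: measurement, 2: name, 4: a
result = project(file_columns, [(3, "measurement"), (2, "name"), (4, "a")])
# Field id 4 is absent from the file, so column "a" is all nulls.
```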
+<p>Tables may also define a property <code>schema.name-mapping.default</code> with a JSON name mapping containing a list of field mapping objects. These mappings provide fallback field ids to be used when a data file does not contain field id information. Each object should contain:</p>
+<ul>
+<li><code>names</code>: A required list of 0 or more names for a field.</li>
+<li><code>field-id</code>: An optional Iceberg field ID used when a field&rsquo;s name is present in <code>names</code>.</li>
+<li><code>fields</code>: An optional list of field mappings for child fields of structs, maps, and lists.</li>
+</ul>
+<p>Field mapping fields are constrained by the following rules:</p>
+<ul>
+<li>A name may contain <code>.</code> but this refers to a literal name, not a nested field. For example, <code>a.b</code> refers to a field named <code>a.b</code>, not child field <code>b</code> of field <code>a</code>.</li>
+<li>Each child field should be defined with its own field mapping under <code>fields</code>.</li>
+<li>Multiple values for <code>names</code> may be mapped to a single field ID to support cases where a field may have different names in different data files. For example, all Avro field aliases should be listed in <code>names</code>.</li>
+<li>Fields which exist only in the Iceberg schema and not in imported data files may use an empty <code>names</code> list.</li>
+<li>Fields that exist in imported files but not in the Iceberg schema may omit <code>field-id</code>.</li>
+<li>List types should contain a mapping in <code>fields</code> for <code>element</code>.</li>
+<li>Map types should contain mappings in <code>fields</code> for <code>key</code> and <code>value</code>.</li>
+<li>Struct types should contain mappings in <code>fields</code> for their child fields.</li>
+</ul>
+<p>For details on serialization, see <a href=#name-mapping-serialization>Appendix C</a>.</p>
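The rules above can be made concrete with a small example — a hypothetical name mapping built in Python and stored as the table property; all field names and ids here are invented for illustration.

```python
import json

# Illustrative name mapping for schema.name-mapping.default, following the
# rules above: multiple names map to one field id (e.g. Avro aliases), an
# Iceberg-only field uses an empty names list, and a list type maps its
# element under fields. Names and ids are hypothetical.

name_mapping = [
    {"names": ["id", "record_id"], "field-id": 1},     # two aliases, one id
    {"names": [], "field-id": 2},                      # Iceberg-only field
    {"names": ["points"], "field-id": 3, "fields": [   # list type
        {"names": ["element"], "field-id": 4},
    ]},
]
table_properties = {"schema.name-mapping.default": json.dumps(name_mapping)}
```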
 <h4 id=identifier-field-ids>Identifier Field IDs</h4>
 <p>A schema can optionally track the set of primitive fields that identify rows in a table, using the property <code>identifier-field-ids</code> (see JSON encoding in Appendix C).</p>
 <p>Two rows are the &ldquo;same&rdquo;&mdash;that is, the rows represent the same entity&mdash;if the identifier fields are equal. However, uniqueness of rows by this identifier is not guaranteed or required by Iceberg and it is the responsibility of processing engines or data providers to enforce.</p>
@@ -1090,6 +1109,76 @@
 <li>An alternative, <em>strict projection</em>, creates a partition predicate that will match a file if all of the rows in the file must match the scan predicate. These projections are used to calculate the residual predicates for each file in a scan.</li>
 <li>For example, if <code>file_a</code> has rows with <code>id</code> between 1 and 10 and a delete file contains rows with <code>id</code> between 1 and 4, a scan for <code>id = 9</code> may ignore the delete file because none of the deletes can match a row that will be selected.</li>
 </ol>
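The delete-file example in the list above reduces to a range-overlap test — sketched here with plain Python tuples standing in for per-file min/max stats; this is an illustration of the reasoning, not an Iceberg API.

```python
# Sketch of the example above: a delete file whose id range cannot contain
# the scanned value may be skipped, because none of its deletes can match
# a selected row. Bounds model hypothetical per-file (min, max) stats.

def may_contain(bounds, value):
    """Inclusive check: could a file with these (min, max) stats hold value?"""
    lo, hi = bounds
    return lo <= value <= hi

file_a = (1, 10)      # data file rows have id between 1 and 10
delete_file = (1, 4)  # delete file rows have id between 1 and 4

# Scan for id = 9: the data file may match, but no delete can,
# so the delete file is ignored for this scan.
reads_data = may_contain(file_a, 9)
applies_deletes = may_contain(delete_file, 9)
```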
+<h4 id=snapshot-reference>Snapshot Reference</h4>
+<p>Iceberg tables keep track of branches and tags using snapshot references.
+Tags are labels for individual snapshots. Branches are mutable named references that can be updated by committing a new snapshot as the branch&rsquo;s referenced snapshot using the <a href=#commit-conflict-resolution-and-retry>Commit Conflict Resolution and Retry</a> procedures.</p>
+<p>The snapshot reference object records all the information of a reference, including its snapshot ID, reference type, and <a href=#snapshot-retention-policy>Snapshot Retention Policy</a>.</p>
+<table>
+<thead>
+<tr>
+<th>v1</th>
+<th>v2</th>
+<th>Field name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><em>required</em></td>
+<td><em>required</em></td>
+<td><strong><code>snapshot-id</code></strong></td>
+<td><code>long</code></td>
+<td>A reference&rsquo;s snapshot ID. The tagged snapshot or latest snapshot of a branch.</td>
+</tr>
+<tr>
+<td><em>required</em></td>
+<td><em>required</em></td>
+<td><strong><code>type</code></strong></td>
+<td><code>string</code></td>
+<td>Type of the reference, <code>tag</code> or <code>branch</code></td>
+</tr>
+<tr>
+<td><em>optional</em></td>
+<td><em>optional</em></td>
+<td><strong><code>min-snapshots-to-keep</code></strong></td>
+<td><code>int</code></td>
+<td>For <code>branch</code> type only, a positive number for the minimum number of snapshots to keep in a branch while expiring snapshots. Defaults to table property <code>history.expire.min-snapshots-to-keep</code>.</td>
+</tr>
+<tr>
+<td><em>optional</em></td>
+<td><em>optional</em></td>
+<td><strong><code>max-snapshot-age-ms</code></strong></td>
+<td><code>long</code></td>
+<td>For <code>branch</code> type only, a positive number for the max age of snapshots to keep when expiring, including the latest snapshot. Defaults to table property <code>history.expire.max-snapshot-age-ms</code>.</td>
+</tr>
+<tr>
+<td><em>optional</em></td>
+<td><em>optional</em></td>
+<td><strong><code>max-ref-age-ms</code></strong></td>
+<td><code>long</code></td>
+<td>For snapshot references except the <code>main</code> branch, a positive number for the max age of the snapshot reference to keep while expiring snapshots. Defaults to table property <code>history.expire.max-ref-age-ms</code>. The <code>main</code> branch never expires.</td>
+</tr>
+</tbody>
+</table>
+<p>Valid snapshot references are stored as the values of the <code>refs</code> map in table metadata. For serialization, see Appendix C.</p>
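For illustration, a `refs` map might look like the following (the snapshot IDs and reference names are hypothetical), together with a minimal validation sketch of the per-reference constraints from the table above; the helper function itself is illustrative, not part of the spec:

```python
# Hypothetical "refs" map from table metadata: the "main" branch plus a tag
# pinning a snapshot for auditing.
refs = {
    "main": {"snapshot-id": 3051729675574597004, "type": "branch"},
    "audit-2021": {
        "snapshot-id": 3051729675574597004,
        "type": "tag",
        "max-ref-age-ms": 10000000,
    },
}

def validate_ref(name, ref):
    """Check required fields and branch-only retention fields of a reference."""
    assert isinstance(ref["snapshot-id"], int)
    assert ref["type"] in ("tag", "branch")
    if "min-snapshots-to-keep" in ref or "max-snapshot-age-ms" in ref:
        # These retention fields apply to branch references only.
        assert ref["type"] == "branch"
    if name == "main":
        assert ref["type"] == "branch"

for name, ref in refs.items():
    validate_ref(name, ref)
```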
+<h4 id=snapshot-retention-policy>Snapshot Retention Policy</h4>
+<p>Table snapshots expire and are removed from metadata to allow removed or replaced data files to be physically deleted.
+The snapshot expiration procedure removes snapshots from table metadata and applies the table&rsquo;s retention policy.
+The retention policy can be configured both globally and per snapshot reference through the properties <code>min-snapshots-to-keep</code>, <code>max-snapshot-age-ms</code> and <code>max-ref-age-ms</code>.</p>
+<p>When expiring snapshots, retention policies in table and snapshot references are evaluated in the following way:</p>
+<ol>
+<li>Start with an empty set of snapshots to retain</li>
+<li>Remove any refs (other than main) where the referenced snapshot is older than <code>max-ref-age-ms</code></li>
+<li>For each branch and tag, add the referenced snapshot to the retained set</li>
+<li>For each branch, add its ancestors to the retained set until:
+<ol>
+<li>The snapshot is older than <code>max-snapshot-age-ms</code>, AND</li>
+<li>The snapshot is not one of the first <code>min-snapshots-to-keep</code> in the branch (including the branch&rsquo;s referenced snapshot)</li>
+</ol>
+</li>
+<li>Expire any snapshot not in the set of snapshots to retain.</li>
+</ol>
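The steps above can be sketched in Python. This is a simplified model, not a reference implementation: the snapshot shape (`{snapshot_id: (timestamp_ms, parent_id)}`) and the default values are assumptions for illustration.

```python
def snapshots_to_retain(snapshots, refs, now_ms,
                        default_min_keep=1, default_max_age_ms=432000000):
    retained = set()                                            # step 1
    live_refs = {                                               # step 2
        name: ref for name, ref in refs.items()
        if name == "main"
        or "max-ref-age-ms" not in ref
        or now_ms - snapshots[ref["snapshot-id"]][0] <= ref["max-ref-age-ms"]
    }
    for ref in live_refs.values():                              # step 3
        retained.add(ref["snapshot-id"])
    for ref in live_refs.values():                              # step 4
        if ref["type"] != "branch":
            continue
        min_keep = ref.get("min-snapshots-to-keep", default_min_keep)
        max_age_ms = ref.get("max-snapshot-age-ms", default_max_age_ms)
        snapshot_id, kept = ref["snapshot-id"], 0
        while snapshot_id is not None:
            timestamp_ms, parent_id = snapshots[snapshot_id]
            # Stop only when BOTH expiry conditions hold (4.1 AND 4.2).
            if kept >= min_keep and now_ms - timestamp_ms > max_age_ms:
                break
            retained.add(snapshot_id)
            kept += 1
            snapshot_id = parent_id
    return retained  # step 5: any snapshot not in this set is expired
```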
 <h3 id=table-metadata>Table Metadata</h3>
 <p>Table metadata is stored as JSON. Each table metadata change creates a new table metadata file that is committed by an atomic operation. This operation is used to ensure that a new version of table metadata replaces the version on which it was based. This produces a linear history of table versions and ensures that concurrent writes are not lost.</p>
 <p>The atomic operation used to commit metadata depends on how tables are tracked and is not standardized by this spec. See the sections below for examples.</p>
@@ -1193,7 +1282,7 @@
 <td><em>optional</em></td>
 <td><em>optional</em></td>
 <td><strong><code>current-snapshot-id</code></strong></td>
-<td><code>long</code> ID of the current table snapshot.</td>
+<td><code>long</code> ID of the current table snapshot; must be the same as the current ID of the <code>main</code> branch in <code>refs</code>.</td>
 </tr>
 <tr>
 <td><em>optional</em></td>
@@ -1225,6 +1314,12 @@
 <td><strong><code>default-sort-order-id</code></strong></td>
 <td>Default sort order id of the table. Note that this could be used by writers, but is not used when reading because reads use the specs stored in manifest files.</td>
 </tr>
+<tr>
+<td></td>
+<td><em>optional</em></td>
+<td><strong><code>refs</code></strong></td>
+<td>A map of snapshot references. The map keys are the unique snapshot reference names in the table, and the map values are snapshot reference objects. There is always a <code>main</code> branch reference pointing to the <code>current-snapshot-id</code> even if the <code>refs</code> map is null.</td>
+</tr>
 </tbody>
 </table>
 <p>For serialization details, see Appendix C.</p>
@@ -2238,9 +2333,49 @@ Hash results are not dependent on decimal scale, which is part of the type, not
 <td><code>JSON int</code></td>
 <td><code>0</code></td>
 </tr>
+<tr>
+<td><strong><code>refs</code></strong></td>
+<td><code>JSON map with string key and object value:</code><code>{</code>  <code>"&lt;name>": {</code>  <code>"snapshot-id": &lt;id>,</code>  <code>"type": &lt;type>,</code>  <code>"max-ref-age-ms": &lt;long>,</code>  <code>...</code>  <code>}</code>  <code>...</code><code>}</code></td>
+<td><code>{</code>  <code>"test": {</code>  <code>"snapshot-id": 123456789000,</code>  <code>"type": "tag",</code>  <code>"max-ref-age-ms": 10000000</code>  <code>}</code><code>}</code></td>
+</tr>
+</tbody>
+</table>
+<h3 id=name-mapping-serialization>Name Mapping Serialization</h3>
+<p>Name mapping is serialized as a list of field mapping JSON objects, which are serialized as follows:</p>
+<table>
+<thead>
+<tr>
+<th>Field mapping field</th>
+<th>JSON representation</th>
+<th>Example</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><strong><code>names</code></strong></td>
+<td><code>JSON list of strings</code></td>
+<td><code>["latitude", "lat"]</code></td>
+</tr>
+<tr>
+<td><strong><code>field-id</code></strong></td>
+<td><code>JSON int</code></td>
+<td><code>1</code></td>
+</tr>
+<tr>
+<td><strong><code>fields</code></strong></td>
+<td><code>JSON field mappings (list of objects)</code></td>
+<td><code>[{ </code>  <code>"field-id": 4,</code>  <code>"names": ["latitude", "lat"]</code><code>}, {</code>  <code>"field-id": 5,</code>  <code>"names": ["longitude", "long"]</code><code>}]</code></td>
+</tr>
 </tbody>
 </table>
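As a sketch of how such a mapping can be applied (the data shapes below mirror the JSON encoding; the lookup function is illustrative, not an API defined by this spec), a field ID can be resolved for a column path by matching names level by level through the nested `fields` lists:

```python
# Hypothetical name mapping, as plain Python dicts mirroring the JSON form.
name_mapping = [
    {"field-id": 1, "names": ["id", "record_id"]},
    {"field-id": 2, "names": ["data"]},
    {"field-id": 3, "names": ["location"], "fields": [
        {"field-id": 4, "names": ["latitude", "lat"]},
        {"field-id": 5, "names": ["longitude", "long"]},
    ]},
]

def resolve_field_id(mapping, path):
    """Return the mapped field ID for a column path (list of names), else None."""
    field_id = None
    for part in path:
        match = next((m for m in mapping if part in m["names"]), None)
        if match is None:
            return None
        field_id = match.get("field-id")
        mapping = match.get("fields", [])
    return field_id
```

For example, a file column named `lat` nested under `location` resolves to field ID 4 even though the table schema calls the field `latitude`.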
-<h2 id=appendix-d-single-value-serialization>Appendix D: Single-value serialization</h2>
+<p>Example</p>
+<div class=highlight><pre tabindex=0 style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-json data-lang=json>[ { <span style=color:#f92672>&#34;field-id&#34;</span>: <span style=color:#ae81ff>1</span>, <span style=color:#f92672>&#34;names&#34;</span>: [<span style=color:#e6db74>&#34;id&#34;</span>, <span style=color:#e6db74>&#34;record_id&#34;</span>] },
+   { <span style=color:#f92672>&#34;field-id&#34;</span>: <span style=color:#ae81ff>2</span>, <span style=color:#f92672>&#34;names&#34;</span>: [<span style=color:#e6db74>&#34;data&#34;</span>] },
+   { <span style=color:#f92672>&#34;field-id&#34;</span>: <span style=color:#ae81ff>3</span>, <span style=color:#f92672>&#34;names&#34;</span>: [<span style=color:#e6db74>&#34;location&#34;</span>], <span style=color:#f92672>&#34;fields&#34;</span>: [
+       { <span style=color:#f92672>&#34;field-id&#34;</span>: <span style=color:#ae81ff>4</span>, <span style=color:#f92672>&#34;names&#34;</span>: [<span style=color:#e6db74>&#34;latitude&#34;</span>, <span style=color:#e6db74>&#34;lat&#34;</span>] },
+       { <span style=color:#f92672>&#34;field-id&#34;</span>: <span style=color:#ae81ff>5</span>, <span style=color:#f92672>&#34;names&#34;</span>: [<span style=color:#e6db74>&#34;longitude&#34;</span>, <span style=color:#e6db74>&#34;long&#34;</span>] }
+     ] } ]
+</code></pre></div><h2 id=appendix-d-single-value-serialization>Appendix D: Single-value serialization</h2>
 <p>This serialization scheme is for storing single values as individual binary values in the lower and upper bounds maps of manifest files.</p>
 <table>
 <thead>