Posted to commits@arrow.apache.org by gi...@apache.org on 2023/05/01 14:14:42 UTC

[arrow-datafusion] branch asf-site updated (b86b4a85db -> 2f79436d28)

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/arrow-datafusion.git


 discard b86b4a85db Publish built docs triggered by 7c53ef15aa33c98cd91a3735128c29356e92b19c
     new 2f79436d28 Publish built docs triggered by 6e8f91b41a6ec6a2680357b95b2489d87af33571

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (b86b4a85db)
            \
             N -- N -- N   refs/heads/asf-site (2f79436d28)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
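
For readers who want to verify the state described above, here is a
minimal sketch using a local clone; it assumes the discarded commit
object is still fetchable, which may not hold once it has been
garbage-collected:

    # Clone the repository named in this email.
    git clone https://gitbox.apache.org/repos/asf/arrow-datafusion.git
    cd arrow-datafusion
    # List all branches (local and remote-tracking) that still contain
    # the discarded commit.  Empty output means no branch points at it;
    # if the object is missing entirely, git reports an error, which is
    # consistent with a "discard" that is gone forever.
    git branch -a --contains b86b4a85db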


Summary of changes:
 _sources/user-guide/configs.md.txt | 79 +++++++++++++++++++-------------------
 searchindex.js                     |  2 +-
 user-guide/configs.html            | 42 +++++++++++---------
 3 files changed, 64 insertions(+), 59 deletions(-)


[arrow-datafusion] 01/01: Publish built docs triggered by 6e8f91b41a6ec6a2680357b95b2489d87af33571

Posted by gi...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/arrow-datafusion.git

commit 2f79436d281b2554b6a4832814674428b12a5443
Author: github-actions[bot] <gi...@users.noreply.github.com>
AuthorDate: Mon May 1 14:14:36 2023 +0000

    Publish built docs triggered by 6e8f91b41a6ec6a2680357b95b2489d87af33571
---
 _sources/user-guide/configs.md.txt                 | 79 +++++++++++-----------
 _sources/user-guide/sql/data_types.md.txt          |  2 +-
 contributor-guide/architecture.html                |  2 +-
 contributor-guide/communication.html               |  2 +-
 contributor-guide/index.html                       |  2 +-
 contributor-guide/quarterly_roadmap.html           |  2 +-
 contributor-guide/roadmap.html                     |  2 +-
 contributor-guide/specification/index.html         |  2 +-
 contributor-guide/specification/invariants.html    |  2 +-
 .../specification/output-field-name-semantic.html  |  2 +-
 genindex.html                                      |  2 +-
 index.html                                         |  2 +-
 search.html                                        |  2 +-
 searchindex.js                                     |  2 +-
 user-guide/cli.html                                |  2 +-
 user-guide/configs.html                            | 44 ++++++------
 user-guide/dataframe.html                          |  2 +-
 user-guide/example-usage.html                      |  2 +-
 user-guide/expressions.html                        |  2 +-
 user-guide/faq.html                                |  2 +-
 user-guide/introduction.html                       |  2 +-
 user-guide/sql/aggregate_functions.html            |  2 +-
 user-guide/sql/data_types.html                     |  4 +-
 user-guide/sql/ddl.html                            |  2 +-
 user-guide/sql/explain.html                        |  2 +-
 user-guide/sql/index.html                          |  2 +-
 user-guide/sql/information_schema.html             |  2 +-
 user-guide/sql/scalar_functions.html               |  2 +-
 user-guide/sql/select.html                         |  2 +-
 user-guide/sql/sql_status.html                     |  2 +-
 user-guide/sql/subqueries.html                     |  2 +-
 31 files changed, 94 insertions(+), 89 deletions(-)

diff --git a/_sources/user-guide/configs.md.txt b/_sources/user-guide/configs.md.txt
index 4b754bcd8a..decab6a719 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -35,42 +35,43 @@ Values are parsed according to the [same rules used in casts from Utf8](https://
 If the value in the environment variable cannot be cast to the type of the configuration option, the default value will be used instead and a warning emitted.
 Environment variables are read during `SessionConfig` initialisation so they must be set beforehand and will not affect running sessions.
 
-| key                                                        | default    | description                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
-| ---------------------------------------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| datafusion.catalog.create_default_catalog_and_schema       | true       | Whether the default catalog and schema should be created automatically.                                                                                                                                                                                                                                                                                                                                                          [...]
-| datafusion.catalog.default_catalog                         | datafusion | The default catalog name - this impacts what SQL queries use if not specified                                                                                                                                                                                                                                                                                                                                                    [...]
-| datafusion.catalog.default_schema                          | public     | The default schema name - this impacts what SQL queries use if not specified                                                                                                                                                                                                                                                                                                                                                     [...]
-| datafusion.catalog.information_schema                      | false      | Should DataFusion provide access to `information_schema` virtual tables for displaying schema information                                                                                                                                                                                                                                                                                                                        [...]
-| datafusion.catalog.location                                | NULL       | Location scanned to load tables for `default` schema                                                                                                                                                                                                                                                                                                                                                                             [...]
-| datafusion.catalog.format                                  | NULL       | Type of `TableProvider` to use when loading `default` schema                                                                                                                                                                                                                                                                                                                                                                     [...]
-| datafusion.catalog.has_header                              | false      | If the file has a header                                                                                                                                                                                                                                                                                                                                                                                                         [...]
-| datafusion.execution.batch_size                            | 8192       | Default batch size while creating new batches, it's especially useful for buffer-in-memory batches since creating tiny batches would result in too much metadata memory consumption                                                                                                                                                                                                                                              [...]
-| datafusion.execution.coalesce_batches                      | true       | When set to true, record batches will be examined between each operator and small batches will be coalesced into larger batches. This is helpful when there are highly selective filters or joins that could produce tiny output batches. The target batch size is determined by the configuration setting                                                                                                                       [...]
-| datafusion.execution.collect_statistics                    | false      | Should DataFusion collect statistics after listing files                                                                                                                                                                                                                                                                                                                                                                         [...]
-| datafusion.execution.target_partitions                     | 0          | Number of partitions for query execution. Increasing partitions can increase concurrency. Defaults to the number of CPU cores on the system                                                                                                                                                                                                                                                                                      [...]
-| datafusion.execution.time_zone                             | +00:00     | The default time zone Some functions, e.g. `EXTRACT(HOUR from SOME_TIME)`, shift the underlying datetime according to this time zone, and then extract the hour                                                                                                                                                                                                                                                                  [...]
-| datafusion.execution.parquet.enable_page_index             | false      | If true, uses parquet data page level metadata (Page Index) statistics to reduce the number of rows decoded.                                                                                                                                                                                                                                                                                                                     [...]
-| datafusion.execution.parquet.pruning                       | true       | If true, the parquet reader attempts to skip entire row groups based on the predicate in the query and the metadata (min/max values) stored in the parquet file                                                                                                                                                                                                                                                                  [...]
-| datafusion.execution.parquet.skip_metadata                 | true       | If true, the parquet reader skip the optional embedded metadata that may be in the file Schema. This setting can help avoid schema conflicts when querying multiple parquet files with schemas containing compatible types but different metadata                                                                                                                                                                                [...]
-| datafusion.execution.parquet.metadata_size_hint            | NULL       | If specified, the parquet reader will try and fetch the last `size_hint` bytes of the parquet file optimistically. If not specified, two reads are required: One read to fetch the 8-byte parquet footer and another to fetch the metadata length encoded in the footer                                                                                                                                                          [...]
-| datafusion.execution.parquet.pushdown_filters              | false      | If true, filter expressions are be applied during the parquet decoding operation to reduce the number of rows decoded                                                                                                                                                                                                                                                                                                            [...]
-| datafusion.execution.parquet.reorder_filters               | false      | If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query                                                                                                                                                                                                      [...]
-| datafusion.optimizer.enable_round_robin_repartition        | true       | When set to true, the physical plan optimizer will try to add round robin repartitioning to increase parallelism to leverage more CPU cores                                                                                                                                                                                                                                                                                      [...]
-| datafusion.optimizer.filter_null_join_keys                 | false      | When set to true, the optimizer will insert filters before a join between a nullable and non-nullable column to filter out nulls on the nullable side. This filter can add additional overhead when the file format does not fully support predicate push down.                                                                                                                                                                  [...]
-| datafusion.optimizer.repartition_aggregations              | true       | Should DataFusion repartition data using the aggregate keys to execute aggregates in parallel using the provided `target_partitions` level                                                                                                                                                                                                                                                                                       [...]
-| datafusion.optimizer.repartition_file_min_size             | 10485760   | Minimum total files size in bytes to perform file scan repartitioning.                                                                                                                                                                                                                                                                                                                                                           [...]
-| datafusion.optimizer.repartition_joins                     | true       | Should DataFusion repartition data using the join keys to execute joins in parallel using the provided `target_partitions` level                                                                                                                                                                                                                                                                                                 [...]
-| datafusion.optimizer.allow_symmetric_joins_without_pruning | true       | Should DataFusion allow symmetric hash joins for unbounded data sources even when its inputs do not have any ordering or filtering If the flag is not enabled, the SymmetricHashJoin operator will be unable to prune its internal buffers, resulting in certain join types - such as Full, Left, LeftAnti, LeftSemi, Right, RightAnti, and RightSemi - being produced only at the end of the execution. This is not typical in  [...]
-| datafusion.optimizer.repartition_file_scans                | true       | When set to true, file groups will be repartitioned to achieve maximum parallelism. Currently supported only for Parquet format in which case multiple row groups from the same file may be read concurrently. If false then each row group is read serially, though different files may be read in parallel.                                                                                                                    [...]
-| datafusion.optimizer.repartition_windows                   | true       | Should DataFusion repartition data using the partitions keys to execute window functions in parallel using the provided `target_partitions` level                                                                                                                                                                                                                                                                                [...]
-| datafusion.optimizer.repartition_sorts                     | true       | Should DataFusion execute sorts in a per-partition fashion and merge afterwards instead of coalescing first and sorting globally. With this flag is enabled, plans in the form below `text "SortExec: [a@0 ASC]", " CoalescePartitionsExec", " RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1", ` would turn into the plan below which performs better in multithreaded environments `text "SortPreserving [...]
-| datafusion.optimizer.skip_failed_rules                     | true       | When set to true, the logical plan optimizer will produce warning messages if any optimization rules produce errors and then proceed to the next rule. When set to false, any rules that produce errors will cause the query to fail                                                                                                                                                                                             [...]
-| datafusion.optimizer.max_passes                            | 3          | Number of times that the optimizer will attempt to optimize the plan                                                                                                                                                                                                                                                                                                                                                             [...]
-| datafusion.optimizer.top_down_join_key_reordering          | true       | When set to true, the physical plan optimizer will run a top down process to reorder the join keys                                                                                                                                                                                                                                                                                                                               [...]
-| datafusion.optimizer.prefer_hash_join                      | true       | When set to true, the physical plan optimizer will prefer HashJoin over SortMergeJoin. HashJoin can work more efficiently than SortMergeJoin but consumes more memory                                                                                                                                                                                                                                                            [...]
-| datafusion.optimizer.hash_join_single_partition_threshold  | 1048576    | The maximum estimated size in bytes for one input side of a HashJoin will be collected into a single partition                                                                                                                                                                                                                                                                                                                   [...]
-| datafusion.explain.logical_plan_only                       | false      | When set to true, the explain statement will only print logical plans                                                                                                                                                                                                                                                                                                                                                            [...]
-| datafusion.explain.physical_plan_only                      | false      | When set to true, the explain statement will only print physical plans                                                                                                                                                                                                                                                                                                                                                           [...]
-| datafusion.sql_parser.parse_float_as_decimal               | false      | When set to true, SQL parser will parse float as decimal type                                                                                                                                                                                                                                                                                                                                                                    [...]
-| datafusion.sql_parser.enable_ident_normalization           | true       | When set to true, SQL parser will normalize ident (convert ident to lowercase when not quoted)                                                                                                                                                                                                                                                                                                                                   [...]
-| datafusion.sql_parser.dialect                              | generic    | Configure the SQL dialect used by DataFusion's parser; supported values include: Generic, MySQL, PostgreSQL, Hive, SQLite, Snowflake, Redshift, MsSQL, ClickHouse, BigQuery, and Ansi.                                                                                                                                                                                                                                           [...]
+| key                                                        | default    | description                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
+| ---------------------------------------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| datafusion.catalog.create_default_catalog_and_schema       | true       | Whether the default catalog and schema should be created automatically.                                                                                                                                                                                                                                                                                                                                                          [...]
+| datafusion.catalog.default_catalog                         | datafusion | The default catalog name - this impacts what SQL queries use if not specified                                                                                                                                                                                                                                                                                                                                                    [...]
+| datafusion.catalog.default_schema                          | public     | The default schema name - this impacts what SQL queries use if not specified                                                                                                                                                                                                                                                                                                                                                     [...]
+| datafusion.catalog.information_schema                      | false      | Should DataFusion provide access to `information_schema` virtual tables for displaying schema information                                                                                                                                                                                                                                                                                                                        [...]
+| datafusion.catalog.location                                | NULL       | Location scanned to load tables for `default` schema                                                                                                                                                                                                                                                                                                                                                                             [...]
+| datafusion.catalog.format                                  | NULL       | Type of `TableProvider` to use when loading `default` schema                                                                                                                                                                                                                                                                                                                                                                     [...]
+| datafusion.catalog.has_header                              | false      | If the file has a header                                                                                                                                                                                                                                                                                                                                                                                                         [...]
+| datafusion.execution.batch_size                            | 8192       | Default batch size while creating new batches, it's especially useful for buffer-in-memory batches since creating tiny batches would result in too much metadata memory consumption                                                                                                                                                                                                                                              [...]
+| datafusion.execution.coalesce_batches                      | true       | When set to true, record batches will be examined between each operator and small batches will be coalesced into larger batches. This is helpful when there are highly selective filters or joins that could produce tiny output batches. The target batch size is determined by the configuration setting                                                                                                                       [...]
+| datafusion.execution.collect_statistics                    | false      | Should DataFusion collect statistics after listing files                                                                                                                                                                                                                                                                                                                                                                         [...]
+| datafusion.execution.target_partitions                     | 0          | Number of partitions for query execution. Increasing partitions can increase concurrency. Defaults to the number of CPU cores on the system                                                                                                                                                                                                                                                                                      [...]
+| datafusion.execution.time_zone                             | +00:00     | The default time zone Some functions, e.g. `EXTRACT(HOUR from SOME_TIME)`, shift the underlying datetime according to this time zone, and then extract the hour                                                                                                                                                                                                                                                                  [...]
+| datafusion.execution.parquet.enable_page_index             | false      | If true, uses parquet data page level metadata (Page Index) statistics to reduce the number of rows decoded.                                                                                                                                                                                                                                                                                                                     [...]
+| datafusion.execution.parquet.pruning                       | true       | If true, the parquet reader attempts to skip entire row groups based on the predicate in the query and the metadata (min/max values) stored in the parquet file                                                                                                                                                                                                                                                                  [...]
+| datafusion.execution.parquet.skip_metadata                 | true       | If true, the parquet reader skip the optional embedded metadata that may be in the file Schema. This setting can help avoid schema conflicts when querying multiple parquet files with schemas containing compatible types but different metadata                                                                                                                                                                                [...]
+| datafusion.execution.parquet.metadata_size_hint            | NULL       | If specified, the parquet reader will try and fetch the last `size_hint` bytes of the parquet file optimistically. If not specified, two reads are required: One read to fetch the 8-byte parquet footer and another to fetch the metadata length encoded in the footer                                                                                                                                                          [...]
+| datafusion.execution.parquet.pushdown_filters              | false      | If true, filter expressions are be applied during the parquet decoding operation to reduce the number of rows decoded                                                                                                                                                                                                                                                                                                            [...]
+| datafusion.execution.parquet.reorder_filters               | false      | If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query                                                                                                                                                                                                      [...]
+| datafusion.execution.aggregate.scalar_update_factor        | 10         | Specifies the threshold for using `ScalarValue`s to update accumulators during high-cardinality aggregations for each input batch. The aggregation is considered high-cardinality if the number of affected groups is greater than or equal to `batch_size / scalar_update_factor`. In such cases, `ScalarValue`s are utilized for updating accumulators, rather than the default batch-slice approach. This can lead to perform [...]
+| datafusion.optimizer.enable_round_robin_repartition        | true       | When set to true, the physical plan optimizer will try to add round robin repartitioning to increase parallelism to leverage more CPU cores                                                                                                                                                                                                                                                                                      [...]
+| datafusion.optimizer.filter_null_join_keys                 | false      | When set to true, the optimizer will insert filters before a join between a nullable and non-nullable column to filter out nulls on the nullable side. This filter can add additional overhead when the file format does not fully support predicate push down.                                                                                                                                                                  [...]
+| datafusion.optimizer.repartition_aggregations              | true       | Should DataFusion repartition data using the aggregate keys to execute aggregates in parallel using the provided `target_partitions` level                                                                                                                                                                                                                                                                                       [...]
+| datafusion.optimizer.repartition_file_min_size             | 10485760   | Minimum total files size in bytes to perform file scan repartitioning.                                                                                                                                                                                                                                                                                                                                                           [...]
+| datafusion.optimizer.repartition_joins                     | true       | Should DataFusion repartition data using the join keys to execute joins in parallel using the provided `target_partitions` level                                                                                                                                                                                                                                                                                                 [...]
+| datafusion.optimizer.allow_symmetric_joins_without_pruning | true       | Should DataFusion allow symmetric hash joins for unbounded data sources even when its inputs do not have any ordering or filtering If the flag is not enabled, the SymmetricHashJoin operator will be unable to prune its internal buffers, resulting in certain join types - such as Full, Left, LeftAnti, LeftSemi, Right, RightAnti, and RightSemi - being produced only at the end of the execution. This is not typical in  [...]
+| datafusion.optimizer.repartition_file_scans                | true       | When set to true, file groups will be repartitioned to achieve maximum parallelism. Currently supported only for Parquet format in which case multiple row groups from the same file may be read concurrently. If false then each row group is read serially, though different files may be read in parallel.                                                                                                                    [...]
+| datafusion.optimizer.repartition_windows                   | true       | Should DataFusion repartition data using the partitions keys to execute window functions in parallel using the provided `target_partitions` level                                                                                                                                                                                                                                                                                [...]
+| datafusion.optimizer.repartition_sorts                     | true       | Should DataFusion execute sorts in a per-partition fashion and merge afterwards instead of coalescing first and sorting globally. With this flag is enabled, plans in the form below `text "SortExec: [a@0 ASC]", " CoalescePartitionsExec", " RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1", ` would turn into the plan below which performs better in multithreaded environments `text "SortPreserving [...]
+| datafusion.optimizer.skip_failed_rules                     | true       | When set to true, the logical plan optimizer will produce warning messages if any optimization rules produce errors and then proceed to the next rule. When set to false, any rules that produce errors will cause the query to fail                                                                                                                                                                                             [...]
+| datafusion.optimizer.max_passes                            | 3          | Number of times that the optimizer will attempt to optimize the plan                                                                                                                                                                                                                                                                                                                                                             [...]
+| datafusion.optimizer.top_down_join_key_reordering          | true       | When set to true, the physical plan optimizer will run a top down process to reorder the join keys                                                                                                                                                                                                                                                                                                                               [...]
+| datafusion.optimizer.prefer_hash_join                      | true       | When set to true, the physical plan optimizer will prefer HashJoin over SortMergeJoin. HashJoin can work more efficiently than SortMergeJoin but consumes more memory                                                                                                                                                                                                                                                            [...]
+| datafusion.optimizer.hash_join_single_partition_threshold  | 1048576    | The maximum estimated size in bytes for one input side of a HashJoin will be collected into a single partition                                                                                                                                                                                                                                                                                                                   [...]
+| datafusion.explain.logical_plan_only                       | false      | When set to true, the explain statement will only print logical plans                                                                                                                                                                                                                                                                                                                                                            [...]
+| datafusion.explain.physical_plan_only                      | false      | When set to true, the explain statement will only print physical plans                                                                                                                                                                                                                                                                                                                                                           [...]
+| datafusion.sql_parser.parse_float_as_decimal               | false      | When set to true, SQL parser will parse float as decimal type                                                                                                                                                                                                                                                                                                                                                                    [...]
+| datafusion.sql_parser.enable_ident_normalization           | true       | When set to true, SQL parser will normalize ident (convert ident to lowercase when not quoted)                                                                                                                                                                                                                                                                                                                                   [...]
+| datafusion.sql_parser.dialect                              | generic    | Configure the SQL dialect used by DataFusion's parser; supported values include: Generic, MySQL, PostgreSQL, Hive, SQLite, Snowflake, Redshift, MsSQL, ClickHouse, BigQuery, and Ansi.                                                                                                                                                                                                                                           [...]
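
As a brief, hedged illustration of the environment-variable mechanism
mentioned in the hunk context above: each option key maps to a variable
named by uppercasing the key and replacing dots with underscores, and
the variable must be exported before the `SessionConfig` is created.
The binary name below is a placeholder, not part of this commit:

    # DATAFUSION_EXECUTION_BATCH_SIZE corresponds to the
    # datafusion.execution.batch_size option shown in the table above.
    export DATAFUSION_EXECUTION_BATCH_SIZE=4096
    # Any program that initialises its SessionConfig from the
    # environment will then pick up the overridden batch size.
    ./my-datafusion-app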
diff --git a/_sources/user-guide/sql/data_types.md.txt b/_sources/user-guide/sql/data_types.md.txt
index 063976dc3d..1d3455abc2 100644
--- a/_sources/user-guide/sql/data_types.md.txt
+++ b/_sources/user-guide/sql/data_types.md.txt
@@ -38,7 +38,7 @@ the `arrow_typeof` function. For example:
 ```
 
 You can cast a SQL expression to a specific Arrow type using the `arrow_cast` function
-For example, to cast the output of `now()` to a `Timestamp` with second precision rather:
+For example, to cast the output of `now()` to a `Timestamp` with second precision:
 
 ```sql
 ❯ select arrow_cast(now(), 'Timestamp(Second, None)');
diff --git a/contributor-guide/architecture.html b/contributor-guide/architecture.html
index eeedfcc5be..507b292def 100644
--- a/contributor-guide/architecture.html
+++ b/contributor-guide/architecture.html
@@ -366,7 +366,7 @@ possible. You can find the most up to date version in the <a class="reference ex
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/communication.html b/contributor-guide/communication.html
index 82f751ab14..aaf1211277 100644
--- a/contributor-guide/communication.html
+++ b/contributor-guide/communication.html
@@ -437,7 +437,7 @@ for the video call link, add topics and to see what others plan to discuss.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/index.html b/contributor-guide/index.html
index f055d1a85a..21821d538f 100644
--- a/contributor-guide/index.html
+++ b/contributor-guide/index.html
@@ -734,7 +734,7 @@ new specifications as you see fit.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/quarterly_roadmap.html b/contributor-guide/quarterly_roadmap.html
index 99afae3b27..6e2af0849a 100644
--- a/contributor-guide/quarterly_roadmap.html
+++ b/contributor-guide/quarterly_roadmap.html
@@ -529,7 +529,7 @@
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/roadmap.html b/contributor-guide/roadmap.html
index 1dc87d5410..b6f7fca58f 100644
--- a/contributor-guide/roadmap.html
+++ b/contributor-guide/roadmap.html
@@ -546,7 +546,7 @@ be smaller until execution time.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/specification/index.html b/contributor-guide/specification/index.html
index 6da37abec0..561f00e24d 100644
--- a/contributor-guide/specification/index.html
+++ b/contributor-guide/specification/index.html
@@ -351,7 +351,7 @@
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/specification/invariants.html b/contributor-guide/specification/invariants.html
index 10c4ad7e21..b3a492f8d6 100644
--- a/contributor-guide/specification/invariants.html
+++ b/contributor-guide/specification/invariants.html
@@ -868,7 +868,7 @@ schemas.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/specification/output-field-name-semantic.html b/contributor-guide/specification/output-field-name-semantic.html
index b760a5c32e..87548c089a 100644
--- a/contributor-guide/specification/output-field-name-semantic.html
+++ b/contributor-guide/specification/output-field-name-semantic.html
@@ -786,7 +786,7 @@ DataFusion queries planned from both SQL queries and Dataframe APIs.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/genindex.html b/genindex.html
index d8c5b04111..cb1fb7383a 100644
--- a/genindex.html
+++ b/genindex.html
@@ -310,7 +310,7 @@
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/index.html b/index.html
index ace01aeabe..9429ec842e 100644
--- a/index.html
+++ b/index.html
@@ -380,7 +380,7 @@ community.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/search.html b/search.html
index 70607cbcd3..a07ff477d0 100644
--- a/search.html
+++ b/search.html
@@ -339,7 +339,7 @@
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/searchindex.js b/searchindex.js
index 5c1e800c0b..cb770d9d35 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"docnames": ["contributor-guide/architecture", "contributor-guide/communication", "contributor-guide/index", "contributor-guide/quarterly_roadmap", "contributor-guide/roadmap", "contributor-guide/specification/index", "contributor-guide/specification/invariants", "contributor-guide/specification/output-field-name-semantic", "index", "user-guide/cli", "user-guide/configs", "user-guide/dataframe", "user-guide/example-usage", "user-guide/expressions", "user-guide/faq", "use [...]
\ No newline at end of file
+Search.setIndex({"docnames": ["contributor-guide/architecture", "contributor-guide/communication", "contributor-guide/index", "contributor-guide/quarterly_roadmap", "contributor-guide/roadmap", "contributor-guide/specification/index", "contributor-guide/specification/invariants", "contributor-guide/specification/output-field-name-semantic", "index", "user-guide/cli", "user-guide/configs", "user-guide/dataframe", "user-guide/example-usage", "user-guide/expressions", "user-guide/faq", "use [...]
\ No newline at end of file
diff --git a/user-guide/cli.html b/user-guide/cli.html
index dc1363f4b4..6889bc0632 100644
--- a/user-guide/cli.html
+++ b/user-guide/cli.html
@@ -822,7 +822,7 @@ DataFusion<span class="w"> </span>CLI<span class="w"> </span>v13.0.0
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 82c2381f1d..ad95ed140d 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -414,79 +414,83 @@ Environment variables are read during <code class="docutils literal notranslate"
 <td><p>false</p></td>
 <td><p>If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.enable_round_robin_repartition</p></td>
+<tr class="row-even"><td><p>datafusion.execution.aggregate.scalar_update_factor</p></td>
+<td><p>10</p></td>
+<td><p>Specifies the threshold for using <code class="docutils literal notranslate"><span class="pre">ScalarValue</span></code>s to update accumulators during high-cardinality aggregations for each input batch. The aggregation is considered high-cardinality if the number of affected groups is greater than or equal to <code class="docutils literal notranslate"><span class="pre">batch_size</span> <span class="pre">/</span> <span class="pre">scalar_update_factor</span></code>. In such cases [...]
+</tr>
+<tr class="row-odd"><td><p>datafusion.optimizer.enable_round_robin_repartition</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will try to add round robin repartitioning to increase parallelism to leverage more CPU cores</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.filter_null_join_keys</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.filter_null_join_keys</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the optimizer will insert filters before a join between a nullable and non-nullable column to filter out nulls on the nullable side. This filter can add additional overhead when the file format does not fully support predicate push down.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_aggregations</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_aggregations</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the aggregate keys to execute aggregates in parallel using the provided <code class="docutils literal notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_file_min_size</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_file_min_size</p></td>
 <td><p>10485760</p></td>
 <td><p>Minimum total files size in bytes to perform file scan repartitioning.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_joins</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_joins</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the join keys to execute joins in parallel using the provided <code class="docutils literal notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.allow_symmetric_joins_without_pruning</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.allow_symmetric_joins_without_pruning</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion allow symmetric hash joins for unbounded data sources even when its inputs do not have any ordering or filtering If the flag is not enabled, the SymmetricHashJoin operator will be unable to prune its internal buffers, resulting in certain join types - such as Full, Left, LeftAnti, LeftSemi, Right, RightAnti, and RightSemi - being produced only at the end of the execution. This is not typical in stream processing. Additionally, without proper design for long runne [...]
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_file_scans</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_file_scans</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, file groups will be repartitioned to achieve maximum parallelism. Currently supported only for Parquet format in which case multiple row groups from the same file may be read concurrently. If false then each row group is read serially, though different files may be read in parallel.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_windows</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_windows</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the partitions keys to execute window functions in parallel using the provided <code class="docutils literal notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_sorts</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_sorts</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion execute sorts in a per-partition fashion and merge afterwards instead of coalescing first and sorting globally. When this flag is enabled, plans in the form below <code class="docutils literal notranslate"><span class="pre">text</span> <span class="pre">&quot;SortExec:</span> <span class="pre">[a&#64;0</span> <span class="pre">ASC]&quot;,</span> <span class="pre">&quot;</span> <span class="pre">CoalescePartitionsExec&quot;,</span> <span class="pre">&quot;</span>  [...]
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.skip_failed_rules</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.skip_failed_rules</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the logical plan optimizer will produce warning messages if any optimization rules produce errors and then proceed to the next rule. When set to false, any rules that produce errors will cause the query to fail</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.max_passes</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.max_passes</p></td>
 <td><p>3</p></td>
 <td><p>Number of times that the optimizer will attempt to optimize the plan</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.top_down_join_key_reordering</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.top_down_join_key_reordering</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will run a top down process to reorder the join keys</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.prefer_hash_join</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.prefer_hash_join</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will prefer HashJoin over SortMergeJoin. HashJoin can work more efficiently than SortMergeJoin but consumes more memory</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.hash_join_single_partition_threshold</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.hash_join_single_partition_threshold</p></td>
 <td><p>1048576</p></td>
 <td><p>The maximum estimated size in bytes for one input side of a HashJoin to be collected into a single partition</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.logical_plan_only</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.logical_plan_only</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will only print logical plans</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.explain.physical_plan_only</p></td>
+<tr class="row-even"><td><p>datafusion.explain.physical_plan_only</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will only print physical plans</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.sql_parser.parse_float_as_decimal</p></td>
+<tr class="row-odd"><td><p>datafusion.sql_parser.parse_float_as_decimal</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the SQL parser will parse floats as decimal type</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.sql_parser.enable_ident_normalization</p></td>
+<tr class="row-even"><td><p>datafusion.sql_parser.enable_ident_normalization</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the SQL parser will normalize identifiers (convert them to lowercase when not quoted)</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.sql_parser.dialect</p></td>
+<tr class="row-odd"><td><p>datafusion.sql_parser.dialect</p></td>
 <td><p>generic</p></td>
 <td><p>Configure the SQL dialect used by DataFusion’s parser; supported values include: Generic, MySQL, PostgreSQL, Hive, SQLite, Snowflake, Redshift, MsSQL, ClickHouse, BigQuery, and Ansi.</p></td>
 </tr>
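
As context for the configuration rows above: these options can typically be changed per session with SQL SET statements, for example from datafusion-cli. The sketch below is illustrative only; the option names are taken from the table above, and the values shown are arbitrary examples rather than recommendations.

    SET datafusion.optimizer.repartition_joins = false;
    SET datafusion.sql_parser.dialect = 'PostgreSQL';

The same keys can generally also be supplied as environment variables (uppercased, with dots replaced by underscores, e.g. DATAFUSION_OPTIMIZER_REPARTITION_JOINS), which is the mechanism the following hunk refers to.
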
@@ -534,7 +538,7 @@ Environment variables are read during <code class="docutils literal notranslate"
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/dataframe.html b/user-guide/dataframe.html
index fa1e5d906e..d04f51098e 100644
--- a/user-guide/dataframe.html
+++ b/user-guide/dataframe.html
@@ -541,7 +541,7 @@ execution. The plan is evaluated (executed) when an action method is invoked, su
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/example-usage.html b/user-guide/example-usage.html
index 13400ff4f1..3842cc8b64 100644
--- a/user-guide/example-usage.html
+++ b/user-guide/example-usage.html
@@ -600,7 +600,7 @@ with <code class="docutils literal notranslate"><span class="pre">native</span><
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/expressions.html b/user-guide/expressions.html
index 5f828bee94..8a346de27c 100644
--- a/user-guide/expressions.html
+++ b/user-guide/expressions.html
@@ -1020,7 +1020,7 @@ expressions such as <code class="docutils literal notranslate"><span class="pre"
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/faq.html b/user-guide/faq.html
index 87d8b6f21e..93fbba58ad 100644
--- a/user-guide/faq.html
+++ b/user-guide/faq.html
@@ -432,7 +432,7 @@ targets end-users rather than developers of other database systems.</p></li>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/introduction.html b/user-guide/introduction.html
index f0351d3e38..006b815824 100644
--- a/user-guide/introduction.html
+++ b/user-guide/introduction.html
@@ -509,7 +509,7 @@ provide integrations with other systems.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/aggregate_functions.html b/user-guide/sql/aggregate_functions.html
index 07a4377eb4..071e21f1fe 100644
--- a/user-guide/sql/aggregate_functions.html
+++ b/user-guide/sql/aggregate_functions.html
@@ -1104,7 +1104,7 @@ Can be a constant, column, or function, and any combination of arithmetic operat
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/data_types.html b/user-guide/sql/data_types.html
index a992a24f4f..cc9ef53fb3 100644
--- a/user-guide/sql/data_types.html
+++ b/user-guide/sql/data_types.html
@@ -377,7 +377,7 @@ the <code class="docutils literal notranslate"><span class="pre">arrow_typeof</s
 </pre></div>
 </div>
 <p>You can cast a SQL expression to a specific Arrow type using the <code class="docutils literal notranslate"><span class="pre">arrow_cast</span></code> function
-For example, to cast the output of <code class="docutils literal notranslate"><span class="pre">now()</span></code> to a <code class="docutils literal notranslate"><span class="pre">Timestamp</span></code> with second precision rather:</p>
+For example, to cast the output of <code class="docutils literal notranslate"><span class="pre">now()</span></code> to a <code class="docutils literal notranslate"><span class="pre">Timestamp</span></code> with second precision:</p>
 <div class="highlight-sql notranslate"><div class="highlight"><pre><span></span>❯ select arrow_cast(now(), &#39;Timestamp(Second, None)&#39;);
 +---------------------+
 | now()               |
@@ -693,7 +693,7 @@ For example, to cast the output of <code class="docutils literal notranslate"><s
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/ddl.html b/user-guide/sql/ddl.html
index 7431e38933..dd258342b7 100644
--- a/user-guide/sql/ddl.html
+++ b/user-guide/sql/ddl.html
@@ -586,7 +586,7 @@ DROP VIEW [ IF EXISTS ] <b><i>view_name</i></b>;
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/explain.html b/user-guide/sql/explain.html
index a485944128..de36031040 100644
--- a/user-guide/sql/explain.html
+++ b/user-guide/sql/explain.html
@@ -425,7 +425,7 @@ If you need more information output, use <code class="docutils literal notransla
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/index.html b/user-guide/sql/index.html
index b08e8c1bc3..922edc3e0c 100644
--- a/user-guide/sql/index.html
+++ b/user-guide/sql/index.html
@@ -418,7 +418,7 @@
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/information_schema.html b/user-guide/sql/information_schema.html
index 2516ac4fdb..8973dcf9e3 100644
--- a/user-guide/sql/information_schema.html
+++ b/user-guide/sql/information_schema.html
@@ -407,7 +407,7 @@ views of the ISO SQL <code class="docutils literal notranslate"><span class="pre
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/scalar_functions.html b/user-guide/sql/scalar_functions.html
index e83ca78f70..fb8360dc93 100644
--- a/user-guide/sql/scalar_functions.html
+++ b/user-guide/sql/scalar_functions.html
@@ -3352,7 +3352,7 @@ string operators.</p></li>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/select.html b/user-guide/sql/select.html
index 822d9a5efd..2e5141dba6 100644
--- a/user-guide/sql/select.html
+++ b/user-guide/sql/select.html
@@ -645,7 +645,7 @@ This order can be changed to descending by adding <code class="docutils literal
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/sql_status.html b/user-guide/sql/sql_status.html
index e1ba850665..1f92a52d17 100644
--- a/user-guide/sql/sql_status.html
+++ b/user-guide/sql/sql_status.html
@@ -534,7 +534,7 @@
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>
     
diff --git a/user-guide/sql/subqueries.html b/user-guide/sql/subqueries.html
index 088fee2055..985da3d579 100644
--- a/user-guide/sql/subqueries.html
+++ b/user-guide/sql/subqueries.html
@@ -463,7 +463,7 @@ is an example of a filter using a scalar subquery. Only correlated subqueries ar
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.1.3.<br>
+Created using <a href="http://sphinx-doc.org/">Sphinx</a> 6.2.1.<br>
 </p>
     </div>