Posted to commits@hudi.apache.org by xu...@apache.org on 2022/05/23 22:45:12 UTC
[hudi] branch asf-site updated: [MINOR][DOCS] Minor fix on 0.9.0 and 0.11.0 release notes (#5657)
This is an automated email from the ASF dual-hosted git repository.
xushiyan pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a4bc860b43 [MINOR][DOCS] Minor fix on 0.9.0 and 0.11.0 release notes (#5657)
a4bc860b43 is described below
commit a4bc860b4378bedd7dc2783977a7397f028c46ee
Author: Raymond Xu <27...@users.noreply.github.com>
AuthorDate: Mon May 23 15:45:06 2022 -0700
[MINOR][DOCS] Minor fix on 0.9.0 and 0.11.0 release notes (#5657)
---
website/releases/release-0.11.0.md | 7 ++++---
website/releases/release-0.9.0.md | 2 ++
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/website/releases/release-0.11.0.md b/website/releases/release-0.11.0.md
index 18eca96dd8..2d1ed27b1c 100644
--- a/website/releases/release-0.11.0.md
+++ b/website/releases/release-0.11.0.md
@@ -101,7 +101,8 @@ time. Spark SQL DDL support (experimental) was added for Spark 3.1.x and Spark 3
### Slim Utilities Bundle
In 0.11.0, a new `hudi-utilities-slim-bundle` is added to exclude dependencies that could cause conflicts and
-compatibility issues with other frameworks such as Spark.
+compatibility issues with other frameworks such as Spark. `hudi-utilities-slim-bundle` is meant to be paired with a
+chosen Spark bundle:
- `hudi-utilities-slim-bundle` works with Spark 3.1 and 2.4.
- `hudi-utilities-bundle` continues to work with Spark 3.1 as it does in Hudi 0.10.x.
@@ -170,7 +171,7 @@ added support for MOR tables.
### Pulsar Write Commit Callback
Hudi users can use `org.apache.hudi.callback.HoodieWriteCommitCallback` to invoke callback function upon successful
-commits. In 0.11.0, we add`HoodieWriteCommitPulsarCallback` in addition to the existing HTTP callback and Kafka
+commits. In 0.11.0, we add `HoodieWriteCommitPulsarCallback` in addition to the existing HTTP callback and Kafka
callback. Please refer to the [configurations page](/docs/configurations#Write-commit-pulsar-callback-configs) for
detailed settings.
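As a rough illustration of the new callback, the options below sketch how a writer might enable it. The keys and the fully qualified class name are assumptions based on this release note; the exact values should be taken from the linked configurations page.

```python
# Hypothetical Hudi writer options enabling the new Pulsar commit callback.
# Key names and the callback class path are assumptions -- verify against
# the Hudi configurations page before use.
hudi_options = {
    "hoodie.write.commit.callback.on": "true",
    "hoodie.write.commit.callback.class":
        "org.apache.hudi.utilities.callback.pulsar.HoodieWriteCommitPulsarCallback",
    # Illustrative Pulsar connection settings:
    "hoodie.write.commit.callback.pulsar.broker.service.url": "pulsar://localhost:6650",
    "hoodie.write.commit.callback.pulsar.topic": "hudi-commits",
}

# Typical usage (sketch):
# df.write.format("hudi").options(**hudi_options).mode("append").save(base_path)
```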
@@ -196,7 +197,7 @@ tables. This is useful when tailing Hive tables in `HoodieDeltaStreamer` instead
with [BigQuery integration](/docs/gcp_bigquery).
- For Spark readers that rely on extracting physical partition path,
set `hoodie.datasource.read.extract.partition.values.from.path=true` to stay compatible with existing behaviors.
-- Default index type for Spark was change from `BLOOM`
+- Default index type for Spark was changed from `BLOOM`
to `SIMPLE` ([HUDI-3091](https://issues.apache.org/jira/browse/HUDI-3091)). If you currently rely on the default `BLOOM`
index type, please update your configuration accordingly.
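The two migration knobs mentioned in this hunk can be sketched as writer/reader options. The key names come straight from the release note; everything else (values, structure) is illustrative only.

```python
# Sketch of the 0.11.0 migration options referenced above; the option keys
# are quoted from the release note, the surrounding dicts are illustrative.
write_options = {
    # Restore the pre-0.11.0 default index type if your pipeline relied on it:
    "hoodie.index.type": "BLOOM",
}

read_options = {
    # Keep extracting partition values from the physical path, as before:
    "hoodie.datasource.read.extract.partition.values.from.path": "true",
}
```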
diff --git a/website/releases/release-0.9.0.md b/website/releases/release-0.9.0.md
index 9837230322..aebc32c595 100644
--- a/website/releases/release-0.9.0.md
+++ b/website/releases/release-0.9.0.md
@@ -22,6 +22,8 @@ last_modified_at: 2021-08-26T08:40:00-07:00
support the older configs string variables, users are encouraged to use the new `ConfigProperty` equivalents, as noted
in the deprecation notices. In most cases, it is as simple as calling `.key()` and `.defaultValue()` on the corresponding
alternative. e.g `RECORDKEY_FIELD_OPT_KEY` can be replaced by `RECORDKEY_FIELD_NAME.key()`
+- If `URL_ENCODE_PARTITIONING_OPT_KEY=true` is set *and* `<` and `>` are present in the URL partition paths, users would
+ need to migrate the table because the encoding logic changed: `<` (previously encoded as `%3C`) and `>` (previously encoded as `%3E`) won't be escaped in 0.9.0.
## Release Highlights
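The percent-encoding behavior behind the 0.9.0 note above can be illustrated with Python's standard library. This is only a sketch of how `<` and `>` encode in a path, not Hudi's actual escaping code; the example path is made up.

```python
from urllib.parse import quote, unquote

# Standard-library illustration of how `<` and `>` percent-encode in a
# partition path -- the characters whose handling changed in 0.9.0.
path = "region=<us>/2021-08-26"
encoded = quote(path, safe="=/-")  # keep '=', '/', '-' unescaped
print(encoded)  # region=%3Cus%3E/2021-08-26

# Decoding restores the original path:
assert unquote(encoded) == path
```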