Posted to commits@paimon.apache.org by lz...@apache.org on 2023/03/20 05:30:35 UTC
[incubator-paimon] branch master updated: [docs] Decoupling `Overview` documentation from Flink (#654)
This is an automated email from the ASF dual-hosted git repository.
lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-paimon.git
The following commit(s) were added to refs/heads/master by this push:
new fc7b5911f [docs] Decoupling `Overview` documentation from Flink (#654)
fc7b5911f is described below
commit fc7b5911f69eb6182428db80330f3916f0bcb93a
Author: Shuo Cheng <nj...@gmail.com>
AuthorDate: Mon Mar 20 13:30:30 2023 +0800
[docs] Decoupling `Overview` documentation from Flink (#654)
---
docs/content/concepts/overview.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/content/concepts/overview.md b/docs/content/concepts/overview.md
index 0cfd9e291..fbbd7c11d 100644
--- a/docs/content/concepts/overview.md
+++ b/docs/content/concepts/overview.md
@@ -50,7 +50,7 @@ the LSM tree structure to support a large volume of data updates and high-perfor
## Unified Storage
-There are three types of connectors in Flink SQL.
+For streaming engines like Apache Flink, there are typically three types of connectors:
- Message queue, such as Apache Kafka, used in both the source and
intermediate stages of this pipeline, to guarantee that latency stays
within seconds.
@@ -61,9 +61,9 @@ There are three types of connectors in Flink SQL.
Paimon provides table abstraction. It is used in a way that
does not differ from the traditional database:
-- In Flink `batch` execution mode, it acts like a Hive table and
+- In `batch` execution mode, it acts like a Hive table and
supports various batch SQL operations. Querying it returns the
latest snapshot.
-- In Flink `streaming` execution mode, it acts like a message queue.
+- In `streaming` execution mode, it acts like a message queue.
Querying it is like consuming a changelog stream from a message queue
in which historical data never expires.
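
To make the batch/streaming contrast above concrete, here is a minimal sketch in Flink SQL. The table name `my_paimon_table` is hypothetical; the `execution.runtime-mode` setting is standard Flink configuration, not something specific to this commit:

```
-- Batch mode: the query reads the table's latest snapshot and terminates,
-- as it would against a Hive table.
SET 'execution.runtime-mode' = 'batch';
SELECT * FROM my_paimon_table;

-- Streaming mode: the same query behaves like reading a changelog from a
-- message queue -- it keeps emitting changes, and history does not expire.
SET 'execution.runtime-mode' = 'streaming';
SELECT * FROM my_paimon_table;
```

The point of the doc change is that this duality is a property of the table abstraction itself, not of Flink alone, so the same query pattern is intended to carry over to other engines.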