Posted to commits@paimon.apache.org by lz...@apache.org on 2023/05/25 08:06:03 UTC

[incubator-paimon] 09/20: [doc] Introduce how the read.batch-size option can impact memory consumption during compaction on the write-performance page (#1175)

This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.4
in repository https://gitbox.apache.org/repos/asf/incubator-paimon.git

commit 7a98447789a6499b767e609e0ac8b7882e747fed
Author: wgcn <10...@qq.com>
AuthorDate: Fri May 19 14:53:08 2023 +0800

    [doc] Introduce how the read.batch-size option can impact memory consumption during compaction on the write-performance page (#1175)
---
 docs/content/maintenance/write-performance.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/content/maintenance/write-performance.md b/docs/content/maintenance/write-performance.md
index 3dafff492..86c92e959 100644
--- a/docs/content/maintenance/write-performance.md
+++ b/docs/content/maintenance/write-performance.md
@@ -213,4 +213,5 @@ There are three main places in the Paimon writer that take up memory:
 
 * Writer's memory buffer, shared and preempted by all writers of a single task. This memory value can be adjusted by the `write-buffer-size` table property.
 * Memory consumed when merging several sorted runs for compaction. Can be adjusted by the `num-sorted-run.compaction-trigger` option to change the number of sorted runs to be merged.
+* If individual rows are very large, reading too many rows at once during compaction can consume a lot of memory. Reducing the `read.batch-size` option can alleviate the impact in this case.
 * The memory consumed by writing columnar (ORC, Parquet, etc.) files, which is not adjustable.
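
For reference, the adjustable knobs mentioned in the list above are ordinary table options, so they can be tuned per table. A minimal sketch using Flink SQL (the style the Paimon docs use for table configuration); the table name `my_table` and the concrete values are illustrative assumptions, not recommendations:

    -- Illustrative tuning only: shrink the shared write buffer, trigger
    -- compaction with fewer sorted runs to merge at once, and read fewer
    -- rows per batch so very large rows use less memory during compaction.
    ALTER TABLE my_table SET (
        'write-buffer-size' = '128 mb',
        'num-sorted-run.compaction-trigger' = '3',
        'read.batch-size' = '256'
    );

Lower values trade write/compaction throughput for a smaller memory footprint, so they are worth adjusting only when the writer actually hits memory pressure.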