Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2019/10/16 01:28:22 UTC

[GitHub] [incubator-hudi] leesf commented on a change in pull request #958: [HUDI-273] Translate the Writing Data page into Chinese documentation

URL: https://github.com/apache/incubator-hudi/pull/958#discussion_r335243986
 
 

 ##########
 File path: docs/writing_data.cn.md
 ##########
 @@ -1,44 +1,43 @@
 ---
-title: Writing Hudi Datasets
+title: 写入 Hudi 数据集
 keywords: hudi, incremental, batch, stream, processing, Hive, ETL, Spark SQL
 sidebar: mydoc_sidebar
 permalink: writing_data.html
 toc: false
-summary: In this page, we will discuss some available tools for incrementally ingesting & storing data.
+summary: 这一页里,我们将讨论一些可用的工具,这些工具可用于增量摄取和存储数据。
 ---
 
-In this section, we will cover ways to ingest new changes from external sources or even other Hudi datasets using the [DeltaStreamer](#deltastreamer) tool, as well as 
-speeding up large Spark jobs via upserts using the [Hudi datasource](#datasource-writer). Such datasets can then be [queried](querying_data.html) using various query engines.
+这一节我们将介绍使用[DeltaStreamer](#deltastreamer)工具从外部源甚至其他Hudi数据集摄取新更改的方法,
+以及使用[Hudi数据源](#datasource-writer)通过upserts加快大型Spark作业的方法。
 
 Review comment:
  以及使用[Hudi数据源](#datasource-writer)通过upserts加快大型Spark作业的方法 -> 以及通过使用[Hudi数据源](#datasource-writer)的upserts加快大型Spark作业的方法。 Would this read better?
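
For context, the sentence under discussion corresponds to the removed English original in the diff above (roughly: "speeding up large Spark jobs via upserts using the Hudi datasource"). A minimal Scala sketch of such an upsert through the Spark datasource API might look like the following; the paths, table name, and field names are hypothetical placeholders, and the hudi-spark bundle is assumed to be on the classpath:

    import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

    val spark = SparkSession.builder()
      .appName("hudi-upsert-sketch")
      .getOrCreate()

    // Hypothetical batch of incoming changes to apply to the dataset.
    val inputDF: DataFrame = spark.read.json("/tmp/incoming_changes")

    // Upsert the batch into a Hudi dataset via the Spark datasource API.
    // Record key, partition path, precombine field, table name, and base
    // path are all placeholders for illustration.
    inputDF.write
      .format("org.apache.hudi")
      .option("hoodie.datasource.write.operation", "upsert")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "region")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("hoodie.table.name", "hudi_trips")
      .mode(SaveMode.Append)
      .save("/tmp/hudi_trips")

Note that SaveMode.Append is used because each such write is an incremental commit against the existing dataset, rather than a full overwrite.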

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services