Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2020/08/05 23:09:36 UTC

[GitHub] [iceberg] kbendick commented on a change in pull request #1261: Spark: [DOC] guide about structured streaming sink for Iceberg

kbendick commented on a change in pull request #1261:
URL: https://github.com/apache/iceberg/pull/1261#discussion_r466053084



##########
File path: site/docs/spark-structured-streaming.md
##########
@@ -0,0 +1,184 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Spark Structured Streaming
+
+Iceberg uses Apache Spark's DataSourceV2 API for data source and catalog implementations. Spark DSv2 is an evolving API
+with different levels of support in Spark versions.
+
+As of Spark 3.0, streaming queries cannot yet read from or write to a table by table identifier, so the examples below address Iceberg tables by path.
+
+| Feature support                                  | Spark 3.0 | Spark 2.4 | Notes                                          |
+|--------------------------------------------------|----------|------------|------------------------------------------------|
+| [DataFrame write](#writing-with-streaming-query) | ✔        | ✔          |                                                |
+
+## Writing with streaming query
+
+To write values from a streaming query to an Iceberg table, use `DataStreamWriter`:
+
+```scala
+data.writeStream
+    .format("iceberg")
+    .outputMode("append")
+    .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
+    .option("path", pathToTable)
+    .option("checkpointLocation", checkpointPath)
+    .start()
+```
+
+Iceberg supports the following output modes:
+
+* append
+* complete
+
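+For example, `complete` mode fits aggregation queries, where each trigger replaces the table contents with the
+full updated result. A sketch, assuming the input stream has a `category` column:
+
+```scala
+// Running count per category; complete mode rewrites the whole result each trigger.
+val counts = data.groupBy("category").count()
+
+counts.writeStream
+    .format("iceberg")
+    .outputMode("complete")
+    .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
+    .option("path", pathToTable)
+    .option("checkpointLocation", checkpointPath)
+    .start()
+```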
+The table should be created before starting the streaming query.
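+
+For example, a target table could be created ahead of time with Iceberg's `HadoopTables` API. A minimal sketch,
+assuming the illustrative two-column schema below matches the streaming data:
+
+```scala
+import org.apache.hadoop.conf.Configuration
+import org.apache.iceberg.{PartitionSpec, Schema}
+import org.apache.iceberg.hadoop.HadoopTables
+import org.apache.iceberg.types.Types
+
+// Illustrative schema; use the actual schema of the streaming data.
+val schema = new Schema(
+    Types.NestedField.required(1, "id", Types.LongType.get()),
+    Types.NestedField.optional(2, "data", Types.StringType.get()))
+
+// Create an unpartitioned table at the path the streaming query writes to.
+val tables = new HadoopTables(new Configuration())
+tables.create(schema, PartitionSpec.unpartitioned(), pathToTable)
+```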
+
+## Maintenance
+
+Streaming queries can create new table versions quickly, which produces a large amount of table metadata to track
+those versions. Maintaining the table by tuning the rate of commits, expiring old snapshots, and automatically
+cleaning up metadata files is highly recommended.
+
+### Tune the rate of commits
+
+A high rate of commits produces many data files, manifests, and snapshots, which makes the table hard to
+maintain. We encourage a trigger interval of at least 1 minute, and increasing the interval if you encounter
+issues.
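+
+For example, old snapshots can be expired with the table API. A sketch, using an illustrative one-day
+retention window:
+
+```scala
+import java.util.concurrent.TimeUnit
+import org.apache.hadoop.conf.Configuration
+import org.apache.iceberg.hadoop.HadoopTables
+
+// Load the table written by the streaming query.
+val table = new HadoopTables(new Configuration()).load(pathToTable)
+
+// Expire snapshots older than one day; data files no longer referenced by
+// any remaining snapshot are cleaned up as part of the commit.
+val oneDayAgo = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(1)
+table.expireSnapshots()
+    .expireOlderThan(oneDayAgo)
+    .commit()
+```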

Review comment:
       I agree with @HeartSaVioR that this is essential knowledge on structured streaming. However, if you wanted to add a link to trigger intervals specifically, the `latest` (kept up to date) link would likely be https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org