Posted to commits@spark.apache.org by sr...@apache.org on 2016/08/29 12:12:17 UTC

spark git commit: fixed a typo

Repository: spark
Updated Branches:
  refs/heads/master 1a48c0047 -> 08913ce00


fixed a typo

idempotant -> idempotent

Author: Seigneurin, Alexis (CONT) <Al...@capitalone.com>

Closes #14833 from aseigneurin/fix-typo.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/08913ce0
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/08913ce0
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/08913ce0

Branch: refs/heads/master
Commit: 08913ce0002a80a989489a31b7353f5ec4a5849f
Parents: 1a48c00
Author: Seigneurin, Alexis (CONT) <Al...@capitalone.com>
Authored: Mon Aug 29 13:12:10 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Mon Aug 29 13:12:10 2016 +0100

----------------------------------------------------------------------
 docs/structured-streaming-programming-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/08913ce0/docs/structured-streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md
index 090b14f..8a88e06 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -406,7 +406,7 @@ Furthermore, this model naturally handles data that has arrived later than expec
 
 ## Fault Tolerance Semantics
 Delivering end-to-end exactly-once semantics was one of key goals behind the design of Structured Streaming. To achieve that, we have designed the Structured Streaming sources, the sinks and the execution engine to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing. Every streaming source is assumed to have offsets (similar to Kafka offsets, or Kinesis sequence numbers)
-to track the read position in the stream. The engine uses checkpointing and write ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotant sinks, Structured Streaming can ensure **end-to-end exactly-once semantics** under any failure.
+to track the read position in the stream. The engine uses checkpointing and write ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotent sinks, Structured Streaming can ensure **end-to-end exactly-once semantics** under any failure.
 
 # API using Datasets and DataFrames
 Since Spark 2.0, DataFrames and Datasets can represent static, bounded data, as well as streaming, unbounded data. Similar to static Datasets/DataFrames, you can use the common entry point `SparkSession` ([Scala](api/scala/index.html#org.apache.spark.sql.SparkSession)/
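For context on the paragraph this one-word fix touches: the fault-tolerance section describes how the engine checkpoints the offset range of each trigger to a write-ahead log, and relies on replayable sources plus idempotent sinks for end-to-end exactly-once semantics. Below is a minimal Scala sketch of a checkpointed streaming query; the socket source, host/port, and checkpoint path are illustrative assumptions only (a socket source is not actually replayable, so a real pipeline would use a source like Kafka).

    import org.apache.spark.sql.SparkSession

    object CheckpointedQuery {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("CheckpointedQuery")
          .master("local[2]")
          .getOrCreate()

        // Illustrative source only: a socket stream is not replayable, so
        // it cannot provide the exactly-once guarantee described above.
        val lines = spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load()

        // The checkpoint location is where the engine records the offset
        // range processed in each trigger; a restarted query resumes from
        // the last recorded offsets instead of reprocessing from scratch.
        val query = lines.writeStream
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/lines") // assumed path
          .start()

        query.awaitTermination()
      }
    }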

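Likewise, the following section's point that a single SparkSession is the common entry point for both bounded and unbounded data can be sketched as below; the JSON directory path is an assumption, and a file-based streaming source needs an explicit schema (borrowed here from the batch read).

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("EntryPointSketch")
      .master("local[2]")
      .getOrCreate()

    // Static, bounded data: a one-shot batch read.
    val staticDf = spark.read.json("/data/events")

    // Streaming, unbounded data: the same entry point, reading the same
    // directory incrementally as new files arrive.
    val streamingDf = spark.readStream
      .schema(staticDf.schema)
      .json("/data/events")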
