Posted to commits@spark.apache.org by ka...@apache.org on 2020/12/04 21:57:34 UTC

[spark] branch branch-3.1 updated: [SPARK-33660][DOCS][SS] Fix Kafka Headers Documentation

This is an automated email from the ASF dual-hosted git repository.

kabhwan pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
     new 3448a1f  [SPARK-33660][DOCS][SS] Fix Kafka Headers Documentation
3448a1f is described below

commit 3448a1f582a177b60d7467dcc004da1ab1dfb5da
Author: german <ge...@gmail.com>
AuthorDate: Sat Dec 5 06:51:54 2020 +0900

    [SPARK-33660][DOCS][SS] Fix Kafka Headers Documentation
    
    ### What changes were proposed in this pull request?
    
    Update the Kafka headers documentation: the headers column type is no longer a map but an array
    
    [jira](https://issues.apache.org/jira/browse/SPARK-33660)
    
    ### Why are the changes needed?
    To help users
    
    ### Does this PR introduce _any_ user-facing change?
    no
    
    ### How was this patch tested?
    
    It is a documentation-only change
    
    Closes #30605 from Gschiavon/SPARK-33660-fix-kafka-headers-documentation.
    
    Authored-by: german <ge...@gmail.com>
    Signed-off-by: Jungtaek Lim (HeartSaVioR) <ka...@gmail.com>
    (cherry picked from commit d671e053e9806d6b4e43a39f5018aa9718790160)
    Signed-off-by: Jungtaek Lim (HeartSaVioR) <ka...@gmail.com>
---
 docs/structured-streaming-kafka-integration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/structured-streaming-kafka-integration.md b/docs/structured-streaming-kafka-integration.md
index f92dd03..5336695 100644
--- a/docs/structured-streaming-kafka-integration.md
+++ b/docs/structured-streaming-kafka-integration.md
@@ -61,7 +61,7 @@ val df = spark
   .option("includeHeaders", "true")
   .load()
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "headers")
-  .as[(String, String, Map)]
+  .as[(String, String, Array[(String, Array[Byte])])]
 
 // Subscribe to multiple topics
 val df = spark

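For context on the corrected type: Kafka headers are delivered as an array of (key, raw-byte-value) pairs rather than a map, because Kafka permits duplicate header keys. A minimal plain-Scala sketch (no Spark dependency; the object and method names here are illustrative, not part of the Spark API) of decoding headers in that shape:

```scala
import java.nio.charset.StandardCharsets.UTF_8

object HeaderDemo {
  // Decode each header value from raw bytes to a UTF-8 string.
  // An array (not a map) is used so duplicate keys survive intact.
  def decode(headers: Array[(String, Array[Byte])]): Seq[(String, String)] =
    headers.map { case (k, v) => (k, new String(v, UTF_8)) }.toSeq

  def main(args: Array[String]): Unit = {
    val headers = Array(
      ("trace-id", "abc123".getBytes(UTF_8)),
      ("retry", "1".getBytes(UTF_8)),
      ("retry", "2".getBytes(UTF_8)) // duplicate keys are legal in Kafka
    )
    decode(headers).foreach(println)
  }
}
```

This is also why the docs now show `.as[(String, String, Array[(String, Array[Byte])])]`: a `Map` would silently collapse duplicate header keys.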

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org