Posted to commits@spark.apache.org by gu...@apache.org on 2020/10/20 15:44:11 UTC

[spark] branch branch-3.0 updated: [MINOR][DOCS] Fix the description about to_avro and from_avro functions

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 4373c71  [MINOR][DOCS] Fix the description about to_avro and from_avro functions
4373c71 is described below

commit 4373c71632abfa2a69d36909a49673cbaa050865
Author: Keiji Yoshida <kj...@gmail.com>
AuthorDate: Wed Oct 21 00:36:45 2020 +0900

    [MINOR][DOCS] Fix the description about to_avro and from_avro functions
    
    ### What changes were proposed in this pull request?
    This pull request updates the description of the `to_avro` and `from_avro` functions to list Python as a supported language, as both functions have been available in Python since Apache Spark 3.0.0 [[SPARK-26856](https://issues.apache.org/jira/browse/SPARK-26856)].
    
    ### Why are the changes needed?
    Same as above.
    
    ### Does this PR introduce _any_ user-facing change?
    Yes. The description changed by this pull request is on https://spark.apache.org/docs/latest/sql-data-sources-avro.html#to_avro-and-from_avro.
    
    ### How was this patch tested?
    Tested manually by building and checking the document in the local environment.
    
    Closes #30105 from kjmrknsn/fix-docs-sql-data-sources-avro.
    
    Authored-by: Keiji Yoshida <kj...@gmail.com>
    Signed-off-by: HyukjinKwon <gu...@apache.org>
    (cherry picked from commit 46ad325e56abd95c0ffdbe64aad78582da8c725d)
    Signed-off-by: HyukjinKwon <gu...@apache.org>
---
 docs/sql-data-sources-avro.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/sql-data-sources-avro.md b/docs/sql-data-sources-avro.md
index d926ae7..69b165e 100644
--- a/docs/sql-data-sources-avro.md
+++ b/docs/sql-data-sources-avro.md
@@ -88,7 +88,7 @@ Kafka key-value record will be augmented with some metadata, such as the ingesti
 * If the "value" field that contains your data is in Avro, you could use `from_avro()` to extract your data, enrich it, clean it, and then push it downstream to Kafka again or write it out to a file.
 * `to_avro()` can be used to turn structs into Avro records. This method is particularly useful when you would like to re-encode multiple columns into a single one when writing data out to Kafka.
 
-Both functions are currently only available in Scala and Java.
+Both functions are currently only available in Scala, Java, and Python.
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">

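For reference, a minimal PySpark sketch of how `from_avro()` and `to_avro()` can be used follows. The broker address, topic name, and Avro schema are illustrative assumptions, and the external `spark-avro` and Kafka connector packages need to be on the classpath (for example via `--packages`).

```python
from pyspark.sql import SparkSession
from pyspark.sql.avro.functions import from_avro, to_avro

spark = SparkSession.builder.getOrCreate()

# Illustrative Avro schema for the Kafka "value" payload (an assumption,
# not taken from the commit).
json_format_schema = """
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age", "type": "int"}
  ]
}
"""

# Read binary key/value records from Kafka (hypothetical broker and topic).
df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:9092")
      .option("subscribe", "topic1")
      .load())

# from_avro() decodes the Avro-encoded "value" column into a struct column;
# to_avro() re-encodes a column (here a single field) back to Avro binary,
# ready to be pushed downstream again.
output = (df
          .select(from_avro("value", json_format_schema).alias("user"))
          .where("user.age > 5")
          .select(to_avro("user.name").alias("value")))
```

The resulting `value` column can then be written back to Kafka with the usual Kafka sink options, or written out to a file.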

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org