Posted to commits@spark.apache.org by we...@apache.org on 2019/04/04 02:31:50 UTC
[spark] branch master updated: [SPARK-26811][SQL][FOLLOWUP] fix some documentation
This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 5c50f68 [SPARK-26811][SQL][FOLLOWUP] fix some documentation
5c50f68 is described below
commit 5c50f682539b4af04b6536e75214bedd7187574b
Author: Wenchen Fan <we...@databricks.com>
AuthorDate: Thu Apr 4 10:31:27 2019 +0800
[SPARK-26811][SQL][FOLLOWUP] fix some documentation
## What changes were proposed in this pull request?
This is a follow-up to https://github.com/apache/spark/pull/24012, fixing two documentation issues:
1. `SupportsRead` and `SupportsWrite` are not internal anymore. They are public interfaces now.
2. `Scan` should link to `BATCH_READ` instead of hardcoding it.
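To illustrate the contract these docs describe, here is a minimal sketch of a readable table: because `capabilities()` reports `BATCH_READ`, the `Scan` produced by its `ScanBuilder` is expected to override `toBatch()`. The class name `MyBatchTable` and its schema are hypothetical, invented for this example; the interfaces are the ones touched by this commit (this sketch assumes a Spark build at this commit on the classpath and is not compilable standalone):

```java
import java.util.Collections;
import java.util.Set;
import org.apache.spark.sql.sources.v2.SupportsRead;
import org.apache.spark.sql.sources.v2.TableCapability;
import org.apache.spark.sql.sources.v2.reader.Scan;
import org.apache.spark.sql.sources.v2.reader.ScanBuilder;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// Hypothetical table that advertises batch reads. Since capabilities()
// contains BATCH_READ, the Scan returned by newScanBuilder(...).build()
// must implement toBatch(); the default implementation throws
// UnsupportedOperationException.
public class MyBatchTable implements SupportsRead {
  @Override
  public String name() {
    return "my_batch_table";
  }

  @Override
  public StructType schema() {
    return new StructType().add("value", "string");
  }

  @Override
  public Set<TableCapability> capabilities() {
    return Collections.singleton(TableCapability.BATCH_READ);
  }

  @Override
  public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
    // ScanBuilder has a single build() method, so a lambda suffices here.
    return () -> new Scan() {
      @Override
      public StructType readSchema() {
        return schema();
      }
      // toBatch() would be overridden here to return a Batch implementation,
      // matching the BATCH_READ capability promised above.
    };
  }
}
```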
## How was this patch tested?
N/A
Closes #24285 from cloud-fan/doc.
Authored-by: Wenchen Fan <we...@databricks.com>
Signed-off-by: Wenchen Fan <we...@databricks.com>
---
.../main/java/org/apache/spark/sql/sources/v2/SupportsRead.java | 2 +-
.../main/java/org/apache/spark/sql/sources/v2/SupportsWrite.java | 2 +-
.../main/java/org/apache/spark/sql/sources/v2/reader/Scan.java | 8 +++++---
3 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsRead.java b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsRead.java
index 67fc72e..826fa2f 100644
--- a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsRead.java
+++ b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsRead.java
@@ -22,7 +22,7 @@ import org.apache.spark.sql.sources.v2.reader.ScanBuilder;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;
/**
- * An internal base interface of mix-in interfaces for readable {@link Table}. This adds
+ * A mix-in interface of {@link Table}, to indicate that it's readable. This adds
* {@link #newScanBuilder(CaseInsensitiveStringMap)} that is used to create a scan for batch,
* micro-batch, or continuous processing.
*/
diff --git a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsWrite.java b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsWrite.java
index b215963..c52e545 100644
--- a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsWrite.java
+++ b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsWrite.java
@@ -22,7 +22,7 @@ import org.apache.spark.sql.sources.v2.writer.WriteBuilder;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;
/**
- * An internal base interface of mix-in interfaces for writable {@link Table}. This adds
+ * A mix-in interface of {@link Table}, to indicate that it's writable. This adds
* {@link #newWriteBuilder(CaseInsensitiveStringMap)} that is used to create a write
* for batch or streaming.
*/
diff --git a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/Scan.java b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/Scan.java
index e97d054..7633d50 100644
--- a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/Scan.java
+++ b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/Scan.java
@@ -24,6 +24,7 @@ import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.sources.v2.SupportsContinuousRead;
import org.apache.spark.sql.sources.v2.SupportsMicroBatchRead;
import org.apache.spark.sql.sources.v2.Table;
+import org.apache.spark.sql.sources.v2.TableCapability;
/**
* A logical representation of a data source scan. This interface is used to provide logical
@@ -32,8 +33,8 @@ import org.apache.spark.sql.sources.v2.Table;
* This logical representation is shared between batch scan, micro-batch streaming scan and
* continuous streaming scan. Data sources must implement the corresponding methods in this
* interface, to match what the table promises to support. For example, {@link #toBatch()} must be
- * implemented, if the {@link Table} that creates this {@link Scan} returns BATCH_READ support in
- * its {@link Table#capabilities()}.
+ * implemented, if the {@link Table} that creates this {@link Scan} returns
+ * {@link TableCapability#BATCH_READ} support in its {@link Table#capabilities()}.
* </p>
*/
@Evolving
@@ -61,7 +62,8 @@ public interface Scan {
/**
* Returns the physical representation of this scan for batch query. By default this method throws
* exception, data sources must overwrite this method to provide an implementation, if the
- * {@link Table} that creates this returns batch read support in its {@link Table#capabilities()}.
+ * {@link Table} that creates this scan returns {@link TableCapability#BATCH_READ} in its
+ * {@link Table#capabilities()}.
*
* @throws UnsupportedOperationException
*/