Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/10/07 09:17:13 UTC

[GitHub] [spark] zhengruifeng commented on a diff in pull request #38086: [SPARK-40539][CONNECT] Initial DataFrame Read API parity for Spark Connect

zhengruifeng commented on code in PR #38086:
URL: https://github.com/apache/spark/pull/38086#discussion_r989876847


##########
python/pyspark/sql/connect/readwriter.py:
##########
@@ -31,6 +39,104 @@ class DataFrameReader:
 
     def __init__(self, client: "RemoteSparkSession") -> None:
         self._client = client
+        self._path = []
+        self._format = None
+        self._schema = None
+        self._options = {}
+
+    def format(self, source: str) -> "DataFrameReader":
+        """
+        Specifies the input data source format.
+
+        .. versionadded:: 3.4.0
+
+        Parameters
+        ----------
+        source : str
+            string, name of the data source, e.g. 'json', 'parquet'.
+
+        """
+        self._format = source
+        return self
+
+    # TODO(SPARK-40539): support StructType in python client and support schema as StructType.
+    def schema(self, schema: str) -> "DataFrameReader":
+        """
+        Specifies the input schema.
+
+        Some data sources (e.g. JSON) can infer the input schema automatically from data.
+        By specifying the schema here, the underlying data source can skip the schema
+        inference step, and thus speed up data loading.
+
+        .. versionadded:: 3.4.0
+
+        Parameters
+        ----------
+        schema : str
+            a DDL-formatted string
+            (For example ``col0 INT, col1 DOUBLE``).
+
+        """
+        self._schema = schema
+        return self
+
+    def option(self, key: str, value: "OptionalPrimitiveType") -> "DataFrameReader":
+        """
+        Adds an input option for the underlying data source.
+
+        .. versionadded:: 3.4.0
+
+        Parameters
+        ----------
+        key : str
+            The key for the option to set. The key is case-insensitive.
+        value
+            The value for the option to set.
+
+        """
+        self._options[key] = str(value)
+        return self
+
+    def load(
+        self,
+        path: Optional[PathOrPaths] = None,
+        format: Optional[str] = None,
+        schema: Optional[str] = None,
+        **options: "OptionalPrimitiveType",
+    ) -> "DataFrame":
+        """
+        Loads data from a data source and returns it as a :class:`DataFrame`.
+
+        .. versionadded:: 3.4.0
+
+        Parameters
+        ----------
+        path : str or list, optional
+            optional string or a list of strings for file-system backed data sources.
+        format : str, optional
+            optional string for format of the data source.
+        schema : str, optional
+            optional DDL-formatted string (For example ``col0 INT, col1 DOUBLE``).
+        **options : dict
+            all other string options
+        """
+        if format is not None:
+            self.format(format)
+        if schema is not None:
+            self.schema(schema)
+        for k, v in options.items():
+            self.option(k, v)
+        if isinstance(path, str):
+            self._path.append(path)
+        elif path is not None:
+            if type(path) != list:

Review Comment:
   ```suggestion
               if not isinstance(path, list):
   ```
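The suggested `isinstance` check is more robust than an exact `type(...)` comparison, because it also accepts subclasses of `list`. A minimal sketch of the difference (the `PathList` subclass is hypothetical, purely for illustration):

```python
# Hypothetical list subclass, e.g. a path collection returned by some helper.
class PathList(list):
    pass

paths = PathList(["a.parquet", "b.parquet"])

# An exact type comparison rejects the subclass:
rejects_subclass = type(paths) != list        # True: the check would wrongly fail
# isinstance accepts list and any of its subclasses:
accepts_subclass = isinstance(paths, list)    # True: the check passes as intended

print(rejects_subclass, accepts_subclass)
```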



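For context, the builder methods under review accumulate state on the reader, and `load` applies any keyword overrides before triggering the read. A simplified stand-in (no Spark Connect session; `FakeReader` is illustrative only, not the actual class) shows the intended chaining behavior, including the reviewer's suggested `isinstance` check:

```python
from typing import Optional


class FakeReader:
    """Illustrative stand-in for the DataFrameReader builder under review."""

    def __init__(self) -> None:
        self._path = []
        self._format: Optional[str] = None
        self._schema: Optional[str] = None
        self._options = {}

    def format(self, source: str) -> "FakeReader":
        self._format = source
        return self

    def schema(self, schema: str) -> "FakeReader":
        self._schema = schema
        return self

    def option(self, key: str, value) -> "FakeReader":
        # Values are stringified, mirroring the patch above.
        self._options[key] = str(value)
        return self

    def load(self, path=None, format=None, schema=None, **options):
        # Keyword arguments override/extend the accumulated builder state.
        if format is not None:
            self.format(format)
        if schema is not None:
            self.schema(schema)
        for k, v in options.items():
            self.option(k, v)
        if isinstance(path, str):
            self._path.append(path)
        elif path is not None:
            if not isinstance(path, list):  # the reviewer's suggested check
                raise TypeError("path must be a str or a list of str")
            self._path.extend(path)
        return self  # a real reader would return a DataFrame here


reader = FakeReader().format("json").option("multiLine", True)
reader.load(path="data.json", schema="col0 INT, col1 DOUBLE")
print(reader._format, reader._path, reader._options)
```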
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


For additional commands, e-mail: reviews-help@spark.apache.org