Posted to reviews@spark.apache.org by zjffdu <gi...@git.apache.org> on 2017/03/01 02:38:46 UTC

[GitHub] spark pull request #10307: [SPARK-12334][SQL][PYSPARK] Support read from mul...

Github user zjffdu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10307#discussion_r103600310
  
    --- Diff: python/pyspark/sql/readwriter.py ---
    @@ -388,16 +388,18 @@ def csv(self, path, schema=None, sep=None, encoding=None, quote=None, escape=Non
             return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
     
         @since(1.5)
    -    def orc(self, path):
    -        """Loads an ORC file, returning the result as a :class:`DataFrame`.
    +    def orc(self, paths):
    --- End diff --
    
    Good catch, I should not break compatibility. BTW, I found that `DataFrameReader.parquet` uses a variable-length argument, which is inconsistent with other file formats such as text, json, and orc that take a string or a list of strings. I can fix this in this PR, or do it in another PR, to make them consistent. What do you think?
    
    ```
    @since(1.4)
        def parquet(self, *paths):
            """Loads Parquet files, returning the result as a :class:`DataFrame`.
    ```
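    
    To make the difference concrete, here is a minimal call-site sketch (not part of the PR; the `SparkSession` setup and the file paths are assumptions for illustration only):
    
    ```python
    from pyspark.sql import SparkSession
    
    spark = SparkSession.builder.appName("reader-consistency-demo").getOrCreate()
    
    # parquet() takes variable-length positional arguments:
    df_parquet = spark.read.parquet("data/a.parquet", "data/b.parquet")
    
    # text() and json() accept either a single string or a list of strings:
    df_text = spark.read.text(["data/a.txt", "data/b.txt"])
    df_json = spark.read.json(["data/a.json", "data/b.json"])
    
    # At the time of this review, orc() accepted only a single string path,
    # which is what this PR extends:
    df_orc = spark.read.orc("data/a.orc")
    ```
    
    On the implementation side, compatibility could be kept by normalizing a single string into a one-element list before passing it to the JVM reader, mirroring the `PythonUtils.toSeq(path)` pattern visible in the `csv` line of the diff above.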

