Posted to reviews@spark.apache.org by HyukjinKwon <gi...@git.apache.org> on 2017/07/10 17:08:48 UTC

[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/18590

    [SPARK-21365][PYTHON] Deduplicate logics parsing DDL type/schema definition

    ## What changes were proposed in this pull request?
    
    This PR addresses the three points below:
    
    - Reuse the existing DDL parser APIs rather than reimplementing them within PySpark (a rough sketch of the delegation follows the examples below).
    
    - Support DDL-formatted schema strings of the form `field type, field type`.
    
    - Support nested data types, as shown below:
    
      **Before**
      ```
      >>> spark.createDataFrame([[[1]]], "struct<a: struct<b: int>>").show()
      ...
      ValueError: The strcut field string format is: 'field_name:field_type', but got: a: struct<b: int>
      ```
    
      ```
      >>> spark.createDataFrame([[[1]]], "a: struct<b: int>").show()
      ...
      ValueError: The strcut field string format is: 'field_name:field_type', but got: a: struct<b: int>
      ```
    
      ```
      >>> spark.createDataFrame([[[1]]], "a int").show()
      ...
      ValueError: Could not parse datatype: a int
      ```
    
      **After**
      ```
      >>> spark.createDataFrame([[[1]]], "struct<a: struct<b: int>>").show()
      +---+
      |  a|
      +---+
      |[1]|
      +---+
      ```
    
      ```
      >>> spark.createDataFrame([[[1]]], "a: struct<b: int>").show()
      +---+
      |  a|
      +---+
      |[1]|
      +---+
      ```
    
      ```
      >>> spark.createDataFrame([[1]], "a int").show()
      +---+
      |  a|
      +---+
      |  1|
      +---+
      ```
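
    A rough sketch of the delegation this enables (simplified; it assumes an active SparkContext `sc`, and the real change in `python/pyspark/sql/types.py` adds one more backwards-compatibility fallback):

    ```python
    from pyspark.sql.types import _parse_datatype_json_string

    def _parse_datatype_string(s):
        jvm = sc._jvm
        try:
            # DDL schema format, e.g. "a INT, b STRING".
            return _parse_datatype_json_string(
                jvm.org.apache.spark.sql.types.StructType.fromDDL(s).json())
        except Exception:
            # Single data types and legacy forms, e.g. "int", "struct<a: int>".
            return _parse_datatype_json_string(
                jvm.org.apache.spark.sql.api.python.PythonSQLUtils.parseDataType(s).json())
    ```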
    
    ## How was this patch tested?
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark deduplicate-python-ddl

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/18590.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #18590
    
----
commit 3472873768aa6227c3fe15a035efd6ca112f88f9
Author: hyukjinkwon <gu...@gmail.com>
Date:   2017-07-10T17:02:06Z

    Deduplicate logics parsing DDL-like type definition

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    **[Test build #79470 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79470/testReport)** for PR 18590 at commit [`3472873`](https://github.com/apache/spark/commit/3472873768aa6227c3fe15a035efd6ca112f88f9).


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Merged build finished. Test PASSed.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    **[Test build #79470 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79470/testReport)** for PR 18590 at commit [`3472873`](https://github.com/apache/spark/commit/3472873768aa6227c3fe15a035efd6ca112f88f9).
     * This patch **fails SparkR unit tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by holdenk <gi...@git.apache.org>.
Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18590#discussion_r126598613
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -2026,16 +2026,26 @@ def __init__(self, func, returnType, name=None):
                     "{0}".format(type(func)))
     
             self.func = func
    -        self.returnType = (
    -            returnType if isinstance(returnType, DataType)
    -            else _parse_datatype_string(returnType))
    +        self._returnType = returnType
             # Stores UserDefinedPythonFunctions jobj, once initialized
    +        self._returnType_placeholder = None
             self._judf_placeholder = None
             self._name = name or (
                 func.__name__ if hasattr(func, '__name__')
                 else func.__class__.__name__)
     
         @property
    +    def returnType(self):
    --- End diff --
    
    We have pretty similar logic below; would it make sense to think about whether there is a nicer, more general way to handle these delayed-initialization classes?
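
    For example (just a sketch, not something this PR has to adopt; `lazy_property` and `Example` are hypothetical names), a small non-data descriptor could cover both placeholder patterns:

    ```python
    class lazy_property(object):
        """Computes the wrapped method once on first access and caches the result."""
        def __init__(self, func):
            self.func = func
            self.name = func.__name__

        def __get__(self, instance, owner):
            if instance is None:
                return self
            value = self.func(instance)
            # Cache on the instance; later lookups hit __dict__ and skip the descriptor.
            instance.__dict__[self.name] = value
            return value


    class Example(object):
        def __init__(self, raw):
            self._raw = raw

        @lazy_property
        def parsed(self):
            print("parsing once")
            return self._raw.strip().lower()


    e = Example("  FOO  ")
    print(e.parsed)  # prints "parsing once" then "foo"
    print(e.parsed)  # cached: just "foo"
    ```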


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Merged build finished. Test FAILed.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    **[Test build #79492 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79492/testReport)** for PR 18590 at commit [`9d857e6`](https://github.com/apache/spark/commit/9d857e6db4bdcc0a5d6034d5d6261e4a30664960).


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    cc @cloud-fan, @felixcheung and @zero323, with whom I remember discussing similar issues a few times.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    **[Test build #79481 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79481/testReport)** for PR 18590 at commit [`3472873`](https://github.com/apache/spark/commit/3472873768aa6227c3fe15a035efd6ca112f88f9).


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    **[Test build #79481 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79481/testReport)** for PR 18590 at commit [`3472873`](https://github.com/apache/spark/commit/3472873768aa6227c3fe15a035efd6ca112f88f9).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    LGTM, merging to master!


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    retest this please


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18590#discussion_r126605703
  
    --- Diff: python/pyspark/sql/types.py ---
    @@ -806,43 +786,43 @@ def _parse_datatype_string(s):
         >>> _parse_datatype_string("blabla") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         >>> _parse_datatype_string("a: int,") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         >>> _parse_datatype_string("array<int") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         >>> _parse_datatype_string("map<int, boolean>>") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         """
    -    s = s.strip()
    -    if s.startswith("array<"):
    -        if s[-1] != ">":
    -            raise ValueError("'>' should be the last char, but got: %s" % s)
    -        return ArrayType(_parse_datatype_string(s[6:-1]))
    -    elif s.startswith("map<"):
    -        if s[-1] != ">":
    -            raise ValueError("'>' should be the last char, but got: %s" % s)
    -        parts = _ignore_brackets_split(s[4:-1], ",")
    -        if len(parts) != 2:
    -            raise ValueError("The map type string format is: 'map<key_type,value_type>', " +
    -                             "but got: %s" % s)
    -        kt = _parse_datatype_string(parts[0])
    -        vt = _parse_datatype_string(parts[1])
    -        return MapType(kt, vt)
    -    elif s.startswith("struct<"):
    -        if s[-1] != ">":
    -            raise ValueError("'>' should be the last char, but got: %s" % s)
    -        return _parse_struct_fields_string(s[7:-1])
    -    elif ":" in s:
    -        return _parse_struct_fields_string(s)
    -    else:
    -        return _parse_basic_datatype_string(s)
    +    sc = SparkContext._active_spark_context
    +
    +    def from_ddl_schema(type_str):
    +        return _parse_datatype_json_string(
    +            sc._jvm.org.apache.spark.sql.types.StructType.fromDDL(type_str).json())
    +
    +    def from_ddl_datatype(type_str):
    +        return _parse_datatype_json_string(
    +            sc._jvm.org.apache.spark.sql.api.python.PythonSQLUtils.parseDataType(type_str).json())
    +
    +    try:
    +        # DDL format, "fieldname datatype, fieldname datatype".
    +        return from_ddl_schema(s)
    +    except Exception as e:
    +        try:
    +            # For backwards compatibility, "integer", "struct<fieldname: datatype>" and etc.
    +            return from_ddl_datatype(s)
    +        except:
    +            try:
    +                # For backwards compatibility, "fieldname: datatype, fieldname: datatype" case.
    --- End diff --
    
    I tested a few cases, but it looks like it will not:
    
    ```scala
    scala> StructType.fromDDL("a struct<a: INT, b: STRING>")
    res5: org.apache.spark.sql.types.StructType = StructType(StructField(a,StructType(StructField(a,IntegerType,true), StructField(b,StringType,true)),true))
    
    scala> StructType.fromDDL("a INT, b STRING")
    res6: org.apache.spark.sql.types.StructType = StructType(StructField(a,IntegerType,true), StructField(b,StringType,true))
    
    scala> StructType.fromDDL("a: INT, b: STRING")
    org.apache.spark.sql.catalyst.parser.ParseException:
    extraneous input ':' expecting ...
    ```
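
    So the colon-separated field list still needs its own fallback. A sketch of one way to keep it working (illustrative only; `from_ddl_datatype` is the helper from the diff above) is to wrap the string and retry the datatype parser:

    ```python
    # Sketch only: "a: INT, b: STRING" -> "struct<a: INT, b: STRING>", then retry.
    def _parse_legacy_fields(s, from_ddl_datatype):
        return from_ddl_datatype("struct<%s>" % s.strip())
    ```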


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    **[Test build #79492 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79492/testReport)** for PR 18590 at commit [`9d857e6`](https://github.com/apache/spark/commit/9d857e6db4bdcc0a5d6034d5d6261e4a30664960).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/18590


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Merged build finished. Test PASSed.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79492/
    Test PASSed.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    cc @cloud-fan, @felixcheung and @zero323, with whom I remember discussing this a few times.


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79470/
    Test FAILed.


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18590#discussion_r126597140
  
    --- Diff: python/pyspark/sql/types.py ---
    @@ -806,43 +786,43 @@ def _parse_datatype_string(s):
         >>> _parse_datatype_string("blabla") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         >>> _parse_datatype_string("a: int,") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         >>> _parse_datatype_string("array<int") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         >>> _parse_datatype_string("map<int, boolean>>") # doctest: +IGNORE_EXCEPTION_DETAIL
         Traceback (most recent call last):
             ...
    -    ValueError:...
    +    ParseException:...
         """
    -    s = s.strip()
    -    if s.startswith("array<"):
    -        if s[-1] != ">":
    -            raise ValueError("'>' should be the last char, but got: %s" % s)
    -        return ArrayType(_parse_datatype_string(s[6:-1]))
    -    elif s.startswith("map<"):
    -        if s[-1] != ">":
    -            raise ValueError("'>' should be the last char, but got: %s" % s)
    -        parts = _ignore_brackets_split(s[4:-1], ",")
    -        if len(parts) != 2:
    -            raise ValueError("The map type string format is: 'map<key_type,value_type>', " +
    -                             "but got: %s" % s)
    -        kt = _parse_datatype_string(parts[0])
    -        vt = _parse_datatype_string(parts[1])
    -        return MapType(kt, vt)
    -    elif s.startswith("struct<"):
    -        if s[-1] != ">":
    -            raise ValueError("'>' should be the last char, but got: %s" % s)
    -        return _parse_struct_fields_string(s[7:-1])
    -    elif ":" in s:
    -        return _parse_struct_fields_string(s)
    -    else:
    -        return _parse_basic_datatype_string(s)
    +    sc = SparkContext._active_spark_context
    +
    +    def from_ddl_schema(type_str):
    +        return _parse_datatype_json_string(
    +            sc._jvm.org.apache.spark.sql.types.StructType.fromDDL(type_str).json())
    +
    +    def from_ddl_datatype(type_str):
    +        return _parse_datatype_json_string(
    +            sc._jvm.org.apache.spark.sql.api.python.PythonSQLUtils.parseDataType(type_str).json())
    +
    +    try:
    +        # DDL format, "fieldname datatype, fieldname datatype".
    +        return from_ddl_schema(s)
    +    except Exception as e:
    +        try:
    +            # For backwards compatibility, "integer", "struct<fieldname: datatype>" and etc.
    +            return from_ddl_datatype(s)
    +        except:
    +            try:
    +                # For backwards compatibility, "fieldname: datatype, fieldname: datatype" case.
    --- End diff --
    
    won't `fieldname: datatype, fieldname: datatype` be parsed as DDL schema?


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by felixcheung <gi...@git.apache.org>.
Github user felixcheung commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Add
    @gatorsmile 
    @holdenk 


[GitHub] spark issue #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing DDL typ...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/18590
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79481/
    Test PASSed.


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18590#discussion_r126481623
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/python/PythonSQLUtils.scala ---
    @@ -0,0 +1,25 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.api.python
    +
    +import org.apache.spark.sql.catalyst.parser.CatalystSqlParser
    +import org.apache.spark.sql.types.DataType
    +
    +private[sql] object PythonSQLUtils {
    +  def parseDataType(typeText: String): DataType = CatalystSqlParser.parseDataType(typeText)
    --- End diff --
    
    Without this, I would have to do something like ...
    
    ```
    getattr(getattr(sc._jvm.org.apache.spark.sql.catalyst.parser, "CatalystSqlParser$"), "MODULE$").parseDataType("a")
    ```
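
    With the helper, the Python side can call it directly (a rough sketch, assuming an active SparkContext `sc` so the py4j gateway is up):

    ```python
    # Sketch: parse a DDL type string on the JVM side and bring it back as a Python DataType.
    from pyspark.sql.types import _parse_datatype_json_string

    jdt = sc._jvm.org.apache.spark.sql.api.python.PythonSQLUtils.parseDataType("struct<a: int>")
    dt = _parse_datatype_json_string(jdt.json())  # Python-side StructType
    ```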


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18590#discussion_r126686970
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -2026,16 +2026,26 @@ def __init__(self, func, returnType, name=None):
                     "{0}".format(type(func)))
     
             self.func = func
    -        self.returnType = (
    -            returnType if isinstance(returnType, DataType)
    -            else _parse_datatype_string(returnType))
    +        self._returnType = returnType
             # Stores UserDefinedPythonFunctions jobj, once initialized
    +        self._returnType_placeholder = None
             self._judf_placeholder = None
             self._name = name or (
                 func.__name__ if hasattr(func, '__name__')
                 else func.__class__.__name__)
     
         @property
    +    def returnType(self):
    --- End diff --
    
    Hmm.. I tried the several ways I could think of, but I could not figure out a nicer one ...


[GitHub] spark pull request #18590: [SPARK-21365][PYTHON] Deduplicate logics parsing ...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18590#discussion_r126482669
  
    --- Diff: python/pyspark/sql/tests.py ---
    @@ -1246,6 +1246,31 @@ def test_struct_type(self):
             with self.assertRaises(TypeError):
                 not_a_field = struct1[9.9]
     
    +    def test_parse_datatype_string(self):
    +        from pyspark.sql.types import _all_atomic_types, _parse_datatype_string
    +        for k, t in _all_atomic_types.items():
    +            if t != NullType:
    --- End diff --
    
    So, if I haven't missed anything, this PR drops support for parsing the `null` type string. I guess it is quite rare that we explicitly set a type to `null`. Also, IIRC, we will soon support `NullType` via `void` (SPARK-20680) as a workaround.

