Posted to reviews@spark.apache.org by rdblue <gi...@git.apache.org> on 2018/08/27 20:07:12 UTC

[GitHub] spark pull request #22206: [SPARK-25213][PYTHON] Add project to v2 scans bef...

Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22206#discussion_r213098202
  
    --- Diff: python/pyspark/sql/tests.py ---
    @@ -6394,6 +6394,17 @@ def test_invalid_args(self):
                     df.withColumn('mean_v', mean_udf(df['v']).over(ow))
     
     
    +class DataSourceV2Tests(ReusedSQLTestCase):
    +    def test_pyspark_udf_SPARK_25213(self):
    --- End diff --
    
    I like that the tests in Scala include this information somewhere. Is there a better place for it in PySpark? I'm not aware of another way to pass extra metadata, but I'm open to it if there's a better way.
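    
    For reference, one option would be to keep the JIRA reference in the test's docstring rather than in the method name. A minimal sketch (the method name and empty body here are placeholders, not the PR's actual test, and the plain unittest.TestCase base stands in for ReusedSQLTestCase):
    
        import unittest
    
        class DataSourceV2Tests(unittest.TestCase):  # the PR extends ReusedSQLTestCase instead
            def test_pyspark_udf_on_v2_source(self):
                """Regression test for SPARK-25213: apply a Python UDF over a
                DataSourceV2 scan and check the query still works as expected."""
                pass  # placeholder; the real assertions live in the PR diff
    
    The docstring keeps the ticket greppable while the reported test name stays readable.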


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org