Posted to issues@spark.apache.org by "Bjørn Jørgensen (Jira)" <ji...@apache.org> on 2022/12/29 18:03:00 UTC
[jira] [Updated] (SPARK-41774) Remove def test_vectorized_udf_unsupported_types
[ https://issues.apache.org/jira/browse/SPARK-41774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Bjørn Jørgensen updated SPARK-41774:
------------------------------------
Description:
https://github.com/apache/spark/blob/18488158beee5435f99899f99b2e90fb6e37f3d5/python/pyspark/sql/tests/pandas/test_pandas_udf_scalar.py#L603
{code:python}
def test_vectorized_udf_wrong_return_type(self):
    with QuietTest(self.sc):
        for udf_type in [PandasUDFType.SCALAR, PandasUDFType.SCALAR_ITER]:
            with self.assertRaisesRegex(
                NotImplementedError,
                "Invalid return type.*scalar Pandas UDF.*ArrayType.*TimestampType",
            ):
                pandas_udf(lambda x: x, ArrayType(TimestampType()), udf_type)
{code}
is the same code as
https://github.com/apache/spark/blob/18488158beee5435f99899f99b2e90fb6e37f3d5/python/pyspark/sql/tests/pandas/test_pandas_udf_scalar.py#L679
{code:python}
def test_vectorized_udf_unsupported_types(self):
    with QuietTest(self.sc):
        for udf_type in [PandasUDFType.SCALAR, PandasUDFType.SCALAR_ITER]:
            with self.assertRaisesRegex(
                NotImplementedError,
                "Invalid return type.*scalar Pandas UDF.*ArrayType.*TimestampType",
            ):
                pandas_udf(lambda x: x, ArrayType(TimestampType()), udf_type)
{code}
So we can either remove one of the two tests, or fix the copy-paste so that each test actually covers a distinct case.
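A quick way to confirm that the two test bodies really are exact duplicates (rather than subtly different) is to compare their compiled code objects. A minimal standalone sketch, using toy stand-in functions since the real tests are methods of the scalar pandas UDF test class and need a live SparkContext:

{code:python}
# Toy stand-ins for the two duplicated tests; the real ones live in
# python/pyspark/sql/tests/pandas/test_pandas_udf_scalar.py, but the
# comparison technique is the same. Their bodies are identical on purpose,
# mirroring the duplication described above.
def test_vectorized_udf_wrong_return_type():
    return "Invalid return type.*scalar Pandas UDF.*ArrayType.*TimestampType"

def test_vectorized_udf_unsupported_types():
    return "Invalid return type.*scalar Pandas UDF.*ArrayType.*TimestampType"

def bodies_match(f, g):
    """True when two functions compile to the same bytecode and constants,
    i.e. their bodies are duplicates regardless of name or source position."""
    return (f.__code__.co_code == g.__code__.co_code
            and f.__code__.co_consts == g.__code__.co_consts)

print(bodies_match(test_vectorized_udf_wrong_return_type,
                   test_vectorized_udf_unsupported_types))  # prints True
{code}

Comparing `co_code` and `co_consts` rather than raw source text ignores the function name and line numbers, so only a genuine difference in behavior would make the check fail.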
was:
https://github.com/apache/spark/blob/18488158beee5435f99899f99b2e90fb6e37f3d5/python/pyspark/sql/tests/pandas/test_pandas_udf_scalar.py#L603
{code:python}
def test_vectorized_udf_wrong_return_type(self):
    with QuietTest(self.sc):
        for udf_type in [PandasUDFType.SCALAR, PandasUDFType.SCALAR_ITER]:
            with self.assertRaisesRegex(
                NotImplementedError,
                "Invalid return type.*scalar Pandas UDF.*ArrayType.*TimestampType",
{code}
is the same code as
https://github.com/apache/spark/blob/18488158beee5435f99899f99b2e90fb6e37f3d5/python/pyspark/sql/tests/pandas/test_pandas_udf_scalar.py#L679
{code:python}
def test_vectorized_udf_unsupported_types(self):
    with QuietTest(self.sc):
        for udf_type in [PandasUDFType.SCALAR, PandasUDFType.SCALAR_ITER]:
            with self.assertRaisesRegex(
                NotImplementedError,
                "Invalid return type.*scalar Pandas UDF.*ArrayType.*TimestampType",
            ):
                pandas_udf(lambda x: x, ArrayType(TimestampType()), udf_type)
{code}
So we can remove one or fix the typo.
> Remove def test_vectorized_udf_unsupported_types
> ------------------------------------------------
>
> Key: SPARK-41774
> URL: https://issues.apache.org/jira/browse/SPARK-41774
> Project: Spark
> Issue Type: Improvement
> Components: Pandas API on Spark
> Affects Versions: 3.4.0
> Reporter: Bjørn Jørgensen
> Priority: Trivial
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org