Posted to issues@spark.apache.org by "Takuya Ueshin (Jira)" <ji...@apache.org> on 2021/09/30 23:44:00 UTC
[jira] [Created] (SPARK-36907) pandas API on Spark: DataFrameGroupBy.apply raises an exception when it returns Series.
Takuya Ueshin created SPARK-36907:
-------------------------------------
Summary: pandas API on Spark: DataFrameGroupBy.apply raises an exception when it returns Series.
Key: SPARK-36907
URL: https://issues.apache.org/jira/browse/SPARK-36907
Project: Spark
Issue Type: Bug
Components: PySpark
Affects Versions: 3.2.0, 3.3.0
Reporter: Takuya Ueshin
When the shortcut path is not taken (i.e., the input is larger than {{compute.shortcut_limit}}), {{DataFrameGroupBy.apply}} raises an exception if the applied function returns a {{Series}}.
{code:python}
>>> import pyspark.pandas as ps
>>> ps.options.compute.shortcut_limit = 3
>>> psdf = ps.DataFrame(
... {"a": [1, 2, 3, 4, 5, 6], "b": [1, 1, 2, 3, 5, 8], "c": [1, 4, 9, 16, 25, 36]},
... columns=["a", "b", "c"],
... )
>>> psdf.groupby("b").apply(lambda x: x["a"])
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
...
ValueError: Length mismatch: Expected axis has 2 elements, new values have 3 elements
{code}
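For comparison, the expected behavior follows plain pandas, which pandas API on Spark aims to mirror: the same {{groupby(...).apply(...)}} returning a column slice produces a {{Series}} (with the group key prepended to the index) rather than raising. A minimal sketch, assuming only pandas is installed:

{code:python}
import pandas as pd

pdf = pd.DataFrame(
    {"a": [1, 2, 3, 4, 5, 6], "b": [1, 1, 2, 3, 5, 8], "c": [1, 4, 9, 16, 25, 36]},
    columns=["a", "b", "c"],
)

# Each group contributes its slice of column "a"; since the per-group
# indexes differ, pandas concatenates them into a single Series keyed
# by (b, original index).
result = pdf.groupby("b").apply(lambda x: x["a"])
print(type(result).__name__)  # Series, not an exception
{code}

This is the reference result the pandas-on-Spark non-shortcut path should match.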
--
This message was sent by Atlassian Jira
(v8.3.4#803005)