Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2016/07/08 19:20:11 UTC
[jira] [Issue Comment Deleted] (SPARK-16449) unionAll raises "Task not serializable"
[ https://issues.apache.org/jira/browse/SPARK-16449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun updated SPARK-16449:
----------------------------------
Comment: was deleted
(was: Oh, I see.)
> unionAll raises "Task not serializable"
> ---------------------------------------
>
> Key: SPARK-16449
> URL: https://issues.apache.org/jira/browse/SPARK-16449
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 1.6.1
> Environment: AWS EMR, Jupyter notebook
> Reporter: Jeff Levy
> Priority: Minor
>
> Goal: Take the output from `describe` on a large DataFrame, then use a loop to calculate `skewness` and `kurtosis` from pyspark.sql.functions for each column, build the results into a two-row DataFrame, and finally use `unionAll` to merge that with the `describe` output.
> Issue: Despite the two DataFrames having the same column names, in the same order and with the same dtypes, the `unionAll` fails with "Task not serializable". However, if I build two test rows from dummy data, `unionAll` works fine. Likewise, if I collect my results and turn them straight back into DataFrames, `unionAll` succeeds.
> Step-by-step code and output with comments can be seen here: https://github.com/UrbanInstitute/pyspark-tutorials/blob/master/unionAll%20error.ipynb
> The issue appears to be in the way the loop in code block 6 builds the rows before parallelizing, but the results look no different from the test rows that do work (a minimal sketch of the workflow appears below). I reproduced this on multiple datasets, so downloading the notebook and pointing it at any data of your own should replicate it.
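For context, here is a minimal sketch of the workflow being described, in PySpark 1.6 style. It is an approximation, not the notebook's actual code: `df` is assumed to be a DataFrame of numeric columns, and `sc`/`sqlContext` are the usual shell handles.

    from pyspark.sql.functions import skewness, kurtosis

    # describe() returns a DataFrame with a 'summary' column plus one
    # string-typed column per input column (count, mean, stddev, min, max)
    stats = df.describe()

    # One row each of per-column skewness and kurtosis
    skew = df.select([skewness(c).alias(c) for c in df.columns]).first()
    kurt = df.select([kurtosis(c).alias(c) for c in df.columns]).first()

    # Build tuples matching describe()'s schema: a label plus one string
    # value per column (unionAll in 1.6 matches columns by position)
    rows = [('skewness',) + tuple(str(v) for v in skew),
            ('kurtosis',) + tuple(str(v) for v in kurt)]
    moments = sqlContext.createDataFrame(sc.parallelize(rows),
                                         schema=stats.columns)

    # Reported failure point:
    combined = stats.unionAll(moments)   # raises "Task not serializable"

    # Reported workaround: collect and rebuild, then the union succeeds
    moments2 = sqlContext.createDataFrame(moments.collect())
    combined = stats.unionAll(moments2)

That collecting and recreating the second DataFrame avoids the error suggests the problem lies in how the parallelized rows are captured, consistent with the reporter's observation about code block 6.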
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)