Posted to issues@spark.apache.org by "Sameer Agarwal (JIRA)" <ji...@apache.org> on 2018/01/08 20:35:00 UTC

[jira] [Updated] (SPARK-18388) Running aggregation on many columns throws SOE

     [ https://issues.apache.org/jira/browse/SPARK-18388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sameer Agarwal updated SPARK-18388:
-----------------------------------
    Target Version/s: 2.4.0  (was: 2.3.0)

> Running aggregation on many columns throws SOE
> ----------------------------------------------
>
>                 Key: SPARK-18388
>                 URL: https://issues.apache.org/jira/browse/SPARK-18388
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.2, 1.6.2, 2.0.1
>         Environment: PySpark 2.0.1, Jupyter
>            Reporter: Raviteja Lokineni
>         Attachments: spark-bug-jupyter.py, spark-bug-stacktrace.txt, spark-bug.csv
>
>
> Use case: I am generating weekly aggregates of every column of the data. The job below fails with a StackOverflowError (SOE).
> {code}
> from pyspark.sql.window import Window
> from pyspark.sql.functions import col, count, max, mean, min, sum
>
> timeSeries = sqlContext.read.option("header", "true").format("csv").load("file:///tmp/spark-bug.csv")
>
> # Hive timestamp is interpreted as UNIX timestamp in seconds
> days = lambda i: i * 86400
>
> # Trailing 7-day window per id: the current row plus the preceding 6 days
> w = (Window()
>      .partitionBy("id")
>      .orderBy(col("dt").cast("timestamp").cast("long"))
>      .rangeBetween(-days(6), 0))
>
> cols = ["id", "dt"]
> skipCols = ["id", "dt"]
> # Loop variable named c (not col) so pyspark.sql.functions.col is not shadowed
> for c in timeSeries.columns:
>     if c in skipCols:
>         continue
>     cols.append(mean(c).over(w).alias("mean_7_" + c))
>     cols.append(count(c).over(w).alias("count_7_" + c))
>     cols.append(sum(c).over(w).alias("sum_7_" + c))
>     cols.append(min(c).over(w).alias("min_7_" + c))
>     cols.append(max(c).over(w).alias("max_7_" + c))
> df = timeSeries.select(cols)
> df.orderBy("id", "dt").write.format("csv").save("file:///tmp/spark-bug-out.csv")
> {code}
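
Editor's note, not part of the original report: two mitigations are commonly suggested for a StackOverflowError on very wide plans like this one. One is to raise the JVM thread stack size, e.g. spark-submit --conf "spark.driver.extraJavaOptions=-Xss64m" (and likewise spark.executor.extraJavaOptions), since the error surfaces while recursively processing the large plan. Another is to keep each plan narrow by computing the aggregates in batches of columns and joining the batch results. A minimal sketch of the batching idea, reusing the window w and skipCols above and assuming (id, dt) uniquely identifies a row:
{code}
# Hypothetical workaround sketch (not from this ticket): build the wide result
# in batches so each individual query plan stays narrow.
batchSize = 20
valueCols = [c for c in timeSeries.columns if c not in skipCols]
result = timeSeries.select("id", "dt")
for i in range(0, len(valueCols), batchSize):
    batch = ["id", "dt"]
    for c in valueCols[i:i + batchSize]:
        batch.append(mean(c).over(w).alias("mean_7_" + c))
        # ... append count/sum/min/max the same way ...
    # The join assumes (id, dt) is unique; otherwise rows would be duplicated
    result = result.join(timeSeries.select(batch), ["id", "dt"])
{code}
Whether either mitigation is sufficient depends on the number of columns; the underlying fix is tracked in this ticket.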



