Posted to issues@spark.apache.org by "Herman van Hovell (JIRA)" <ji...@apache.org> on 2017/03/27 12:03:42 UTC

[jira] [Comment Edited] (SPARK-20106) Nonlazy caching of DataFrame after orderBy/sort

    [ https://issues.apache.org/jira/browse/SPARK-20106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943133#comment-15943133 ] 

Herman van Hovell edited comment on SPARK-20106 at 3/27/17 12:03 PM:
---------------------------------------------------------------------

Caching requires the backing RDD, which in turn requires that we know the backing partitions. This is somewhat special for a global order: it triggers a job (a scan) because the range partition bounds have to be determined by sampling the data.
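
If the sampling job at {{cache()}} time is undesirable, a rough PySpark sketch of two alternatives (assuming an existing {{SparkSession}} named {{spark}}; not taken from the report):

{code}
# Alternative 1: cache before the global sort. cache() then sees a plan whose
# partitioning is already known, so no range-boundary sampling is needed yet.
df = spark.range(1, 1000).cache()
ordered = df.orderBy("id")   # bounds are computed once an action actually runs

# Alternative 2: if a per-partition order is enough, sortWithinPartitions()
# avoids the range-partitioning shuffle (and its sampling job) altogether.
partly_sorted = spark.range(1, 1000).sortWithinPartitions("id").cache()
{code}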

I am closing this as not a problem.


> Nonlazy caching of DataFrame after orderBy/sort
> -----------------------------------------------
>
>                 Key: SPARK-20106
>                 URL: https://issues.apache.org/jira/browse/SPARK-20106
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 2.0.1, 2.1.0
>            Reporter: Richard Liebscher
>            Priority: Minor
>
> Calling {{cache}} or {{persist}} after a call to {{orderBy}} or {{sort}} on a DataFrame is not lazy and triggers a Spark job:
> {code}spark.range(1, 1000).orderBy("id").cache(){code}
> Other operations do not generate a job when cached:
> {code}spark.range(1, 1000).repartition(2).cache()
> spark.range(1, 1000).groupBy("id").agg(fn.min("id")).cache()
> spark.range(1, 1000).cache(){code}
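
One way to check which of these calls launches a job is to compare job counts around {{cache()}}; a hedged sketch, assuming an existing {{SparkSession}} named {{spark}} and no other jobs running concurrently:

{code}
# Compare the number of known jobs (default job group) before and after cache().
tracker = spark.sparkContext.statusTracker()
jobs_before = len(tracker.getJobIdsForGroup())

spark.range(1, 1000).orderBy("id").cache()

jobs_after = len(tracker.getJobIdsForGroup())
print(jobs_after - jobs_before)  # > 0 for the orderBy case; 0 expected for the others
{code}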


