Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2017/03/03 10:25:45 UTC
[jira] [Commented] (SPARK-19808) About the default blocking arg in unpersist
[ https://issues.apache.org/jira/browse/SPARK-19808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894087#comment-15894087 ]
Sean Owen commented on SPARK-19808:
-----------------------------------
(Maybe you can rewrite this as a proposed change rather than question?)
They should be consistent, but I don't think they're worth changing now, because it's a behavior change for little gain. Consider also the destroy() and unpersist() operations for broadcasts.
However, I have never been sure why an application would want to block waiting on an unpersist operation. For that reason, I think most calls in Spark use blocking=false, and I'd personally support making that consistent, unless someone can highlight why non-blocking is sometimes a bad idea?
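To make the trade-off concrete, here is a toy model of the two semantics under discussion. This is not Spark's implementation; the `Cache` class, `_do_cleanup` method, and timings are invented purely to illustrate what a caller observes with `blocking=true` versus `blocking=false`.

```python
import threading
import time

class Cache:
    """Toy model of blocking vs non-blocking unpersist semantics.

    Hypothetical class for illustration only; not Spark's actual
    cache-management code.
    """
    def __init__(self):
        self.persisted = True

    def _do_cleanup(self):
        time.sleep(0.05)           # pretend removing cached blocks takes a while
        self.persisted = False

    def unpersist(self, blocking):
        if blocking:
            self._do_cleanup()     # caller waits until the data is actually gone
            return None
        worker = threading.Thread(target=self._do_cleanup)
        worker.start()             # cleanup proceeds in the background
        return worker              # caller continues immediately

# Non-blocking: the call returns right away; cleanup finishes later.
cache = Cache()
worker = cache.unpersist(blocking=False)
worker.join()                      # only now is the cleanup guaranteed done
assert cache.persisted is False

# Blocking: the call itself does not return until cleanup is complete.
cache2 = Cache()
cache2.unpersist(blocking=True)
assert cache2.persisted is False
```

The sketch shows why `blocking=false` is usually harmless for an application: the caller rarely needs to know the exact moment the cached data is released, which is the point made above.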
> About the default blocking arg in unpersist
> -------------------------------------------
>
> Key: SPARK-19808
> URL: https://issues.apache.org/jira/browse/SPARK-19808
> Project: Spark
> Issue Type: Question
> Components: ML, Spark Core
> Affects Versions: 2.1.0
> Reporter: zhengruifeng
> Priority: Minor
>
> Now, {{unpersist}} is commonly used with its default value in ML.
> Most algorithms, like {{KMeans}}, use {{RDD.unpersist}}, whose default {{blocking}} is {{true}}.
> Meta algorithms, like {{OneVsRest}} and {{CrossValidator}}, use {{Dataset.unpersist}}, whose default {{blocking}} is {{false}}.
> Should the default values for {{RDD.unpersist}} and {{Dataset.unpersist}} be consistent?
> And should all the {{blocking}} args in ML be set to {{false}}?
> [~srowen] [~mlnick] [~yanboliang]
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org