Posted to issues@spark.apache.org by "Vincent (JIRA)" <ji...@apache.org> on 2017/09/22 04:27:00 UTC
[jira] [Updated] (SPARK-22096) use aggregateByKeyLocally to save one stage in calculating ItemFrequency in NaiveBayes
[ https://issues.apache.org/jira/browse/SPARK-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Vincent updated SPARK-22096:
----------------------------
Description:
NaiveBayes currently runs aggregateByKey followed by a collect to compute the frequency of each feature/label. We can implement a new RDD function, 'aggregateByKeyLocally', that merges values locally on each mapper before sending results to a reducer, saving one stage.
We tested this on NaiveBayes and saw a ~16% performance gain with these changes.
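A minimal sketch of the proposed map-side merge, in plain Scala. The name aggregateByKeyLocally comes from this ticket, but the signature and implementation below are illustrative assumptions: partitions are simulated with ordinary sequences rather than a real RDD, and the actual patch would operate on RDD[(K, V)] inside Spark.

```scala
import scala.collection.mutable

// Hypothetical sketch: each "mapper" merges its partition's (key, value)
// pairs into a local hash map before anything is sent to the "reducer",
// so the shuffle/collect stage only sees one compact map per partition.
def aggregateByKeyLocally[K, V, U](
    partitions: Seq[Seq[(K, V)]], // stand-in for the RDD's partitions
    zero: => U
)(seqOp: (U, V) => U, combOp: (U, U) => U): Map[K, U] = {
  // Map side: one local merge per partition (per mapper).
  val perMapper = partitions.map { part =>
    val local = mutable.Map.empty[K, U]
    part.foreach { case (k, v) =>
      local(k) = seqOp(local.getOrElse(k, zero), v)
    }
    local
  }
  // Reduce side: combine the already-compact per-mapper maps.
  perMapper.foldLeft(Map.empty[K, U]) { (acc, local) =>
    local.foldLeft(acc) { case (m, (k, u)) =>
      m.updated(k, m.get(k).map(combOp(_, u)).getOrElse(u))
    }
  }
}

// Example: count label frequencies across two partitions.
val parts = Seq(Seq(("a", 1), ("b", 1)), Seq(("a", 1)))
val freq = aggregateByKeyLocally(parts, 0)(_ + _, _ + _)
println(freq) // Map(a -> 2, b -> 1)
```

Because the per-mapper maps are already aggregated, the reducer combines at most (number of partitions × number of distinct keys) entries instead of every raw record, which is where the saved stage comes from.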
was:
NaiveBayes currently runs aggregateByKey followed by a collect to compute the frequency of each feature/label. We can implement a new RDD function, 'aggregateByKeyLocally', that merges values locally on each mapper before sending results to a reducer, saving one stage.
We tested this on NaiveBayes and saw a ~20% performance gain with these changes.
> use aggregateByKeyLocally to save one stage in calculating ItemFrequency in NaiveBayes
> --------------------------------------------------------------------------------------
>
> Key: SPARK-22096
> URL: https://issues.apache.org/jira/browse/SPARK-22096
> Project: Spark
> Issue Type: Improvement
> Components: ML
> Affects Versions: 2.2.0
> Reporter: Vincent
> Priority: Minor
>
> NaiveBayes currently runs aggregateByKey followed by a collect to compute the frequency of each feature/label. We can implement a new RDD function, 'aggregateByKeyLocally', that merges values locally on each mapper before sending results to a reducer, saving one stage.
> We tested this on NaiveBayes and saw a ~16% performance gain with these changes.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org