Posted to issues@spark.apache.org by "Siddharth Murching (JIRA)" <ji...@apache.org> on 2017/08/21 17:58:00 UTC

[jira] [Created] (SPARK-21799) KMeans Performance Regression (5-6x slowdown) in Spark 2.2

Siddharth Murching created SPARK-21799:
------------------------------------------

             Summary: KMeans Performance Regression (5-6x slowdown) in Spark 2.2
                 Key: SPARK-21799
                 URL: https://issues.apache.org/jira/browse/SPARK-21799
             Project: Spark
          Issue Type: Bug
          Components: MLlib
    Affects Versions: 2.2.0
            Reporter: Siddharth Murching


I've been running KMeans performance tests using [spark-sql-perf|https://github.com/databricks/spark-sql-perf/] and have noticed a regression (slowdowns of 5-6x) when running tests on large datasets in Spark 2.2 vs 2.1.

The test params are:
* Cluster: 510 GB RAM, 16 workers
* Data: 1000000 examples, 10000 features

After talking to [~josephkb], the issue seems related to the changes in [SPARK-18356|https://issues.apache.org/jira/browse/SPARK-18356] introduced in [this PR|https://github.com/apache/spark/pull/16295].

`df.cache()` doesn't set the storageLevel of `df.rdd`, so `handlePersistence` is true even when KMeans is run on a cached DataFrame. This unnecessarily causes another copy of the input dataset to be persisted.
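
For illustration, here is a minimal Scala sketch of the check as I understand it (paraphrased, not the exact MLlib source; `needsHandlePersistence` and `dataset` are illustrative names):

    import org.apache.spark.sql.Dataset
    import org.apache.spark.storage.StorageLevel

    // Paraphrased sketch of the handlePersistence check, not the actual MLlib code.
    def needsHandlePersistence(dataset: Dataset[_]): Boolean = {
      // df.cache() caches the DataFrame, but df.rdd produces a separate RDD whose
      // storage level is still NONE, so this returns true even for a cached input
      // and a second copy of the data gets persisted.
      dataset.rdd.getStorageLevel == StorageLevel.NONE
    }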

As of Spark 2.1 ([JIRA link|https://issues.apache.org/jira/browse/SPARK-16063]), `df.cache()` does set the public `df.storageLevel` member properly, so I'd suggest replacing instances of `df.rdd.storageLevel` with `df.storageLevel` in MLlib algorithms (the same pattern shows up in LogisticRegression, LinearRegression, and others).
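
A hedged sketch of that suggested direction (again with illustrative names, not a patch):

    import org.apache.spark.sql.Dataset
    import org.apache.spark.storage.StorageLevel

    // Suggested direction: check the Dataset's own storage level, which df.cache()
    // does update (public since Spark 2.1 / SPARK-16063), so a cached input is
    // detected and not persisted a second time.
    def needsHandlePersistence(dataset: Dataset[_]): Boolean = {
      dataset.storageLevel == StorageLevel.NONE
    }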



