Posted to issues@spark.apache.org by "Peng Meng (JIRA)" <ji...@apache.org> on 2017/07/12 15:00:03 UTC

[jira] [Created] (SPARK-21389) ALS recommendForAll optimization uses Native BLAS

Peng Meng created SPARK-21389:
---------------------------------

             Summary: ALS recommendForAll optimization uses Native BLAS
                 Key: SPARK-21389
                 URL: https://issues.apache.org/jira/browse/SPARK-21389
             Project: Spark
          Issue Type: Improvement
          Components: ML, MLlib
    Affects Versions: 2.3.0
            Reporter: Peng Meng


In Spark 2.2, ALS recommendForAll was optimized to use a hand-written matrix multiplication that selects the top-K items for each user. That method effectively reduces the GC problem. However, with native BLAS GEMM implementations such as Intel MKL and OpenBLAS, matrix multiplication is roughly 10x faster than the hand-written method.

I have rewritten recommendForAll to use GEMM and measured about a 20%-30% improvement over the current master implementation.

I will clean up the code and submit it for discussion.
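
As a rough illustration only (not the actual patch), the idea per block of factors is: replace the hand-written dot-product loop with a single GEMM call that scores all user/item pairs in the block, then keep the top-K per user. The sketch below assumes netlib-java's BLAS wrapper (the backend Spark 2.x ships with), row-major flat float arrays of shape (numRows, rank) as in ALS, and hypothetical names (BlockRecommend, recommendBlock) chosen for illustration.

import com.github.fommil.netlib.BLAS.{getInstance => blas}

object BlockRecommend {

  // Score one block of user factors against one block of item factors with a
  // single SGEMM call, then select the top-K items per user.
  def recommendBlock(
      userFactors: Array[Float], numUsers: Int,
      itemFactors: Array[Float], numItems: Int,
      rank: Int, topK: Int): Array[Array[(Int, Float)]] = {

    // scores is column-major (numUsers x numItems):
    // scores(i * numUsers + u) = dot(userFactors(u), itemFactors(i))
    val scores = new Array[Float](numUsers * numItems)

    // A row-major (numRows x rank) array is a column-major (rank x numRows)
    // matrix, i.e. the transpose, hence "T" for the user-factor operand.
    blas.sgemm("T", "N", numUsers, numItems, rank,
      1.0f, userFactors, rank, itemFactors, rank, 0.0f, scores, numUsers)

    // Top-K per user via a small bounded min-heap on the score.
    Array.tabulate(numUsers) { u =>
      val heap = collection.mutable.PriorityQueue.empty[(Int, Float)](
        Ordering.by[(Int, Float), Float](_._2).reverse)
      var i = 0
      while (i < numItems) {
        val s = scores(i * numUsers + u)
        if (heap.size < topK) {
          heap.enqueue((i, s))
        } else if (s > heap.head._2) {
          heap.dequeue()
          heap.enqueue((i, s))
        }
        i += 1
      }
      heap.dequeueAll.reverse.toArray
    }
  }
}

The GEMM call hands the whole block product to MKL/OpenBLAS in one shot, so the scoring loop that previously dominated the hand-written version is done in optimized native code; only the top-K selection stays on the JVM side.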



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org