Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:20:08 UTC
[jira] [Updated] (SPARK-10870) Criteo Display Advertising Challenge
[ https://issues.apache.org/jira/browse/SPARK-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-10870:
---------------------------------
Labels: bulk-closed (was: )
> Criteo Display Advertising Challenge
> ------------------------------------
>
> Key: SPARK-10870
> URL: https://issues.apache.org/jira/browse/SPARK-10870
> Project: Spark
> Issue Type: Sub-task
> Components: ML
> Reporter: Peter Rudenko
> Priority: Major
> Labels: bulk-closed
>
> Very useful dataset for testing pipelines because of:
> # "Big data" dataset - the original Kaggle competition dataset is 12 GB, and there is also a [1 TB|http://labs.criteo.com/downloads/download-terabyte-click-logs/] dataset with the same schema.
> # Sparse models - the categorical features have high cardinality.
> # Reproducible results - the data is public, and many other distributed machine learning libraries (e.g. [wormhole|https://github.com/dmlc/wormhole/blob/master/doc/tutorial/criteo_kaggle.rst], [parameter server|https://github.com/dmlc/parameter_server/blob/master/example/linear/criteo/README.md], [azure ml|https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-process-hive-criteo-walkthrough/#mltasks], etc.) have published baseline benchmarks against which we could compare.
> I have some baseline results with custom models (GBDT encoders and a tuned LR) on spark-1.4. Will build pipelines using the public Spark models. The [winning solution|http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf] used a GBDT encoder (not available in Spark, but not difficult to build from the GBT in MLlib) + hashing + a factorization machine (planned for spark-1.6).
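The "hashing" step mentioned above (the hashing trick) can be sketched in a few lines. This is a minimal illustration, not the winning solution's code: the `hash_feature` helper, the column names, and the bucket count are all hypothetical, chosen only to show how a high-cardinality categorical value maps to a fixed-size feature index without storing a vocabulary.

```python
import hashlib

# Hypothetical helper: hash a (column, value) pair into one of
# num_buckets feature indices. High-cardinality categoricals (point 2
# above) thus fit in a fixed-width sparse vector, at the cost of
# occasional collisions.
def hash_feature(column: str, value: str, num_buckets: int = 2**20) -> int:
    key = f"{column}={value}".encode("utf-8")
    digest = hashlib.md5(key).hexdigest()
    # Reduce the 128-bit digest modulo the number of buckets.
    return int(digest, 16) % num_buckets

# Two distinct ad IDs land (almost certainly) in different buckets,
# and the same ID always lands in the same bucket.
print(hash_feature("ad_id", "12345"))
print(hash_feature("ad_id", "67890"))
```

In a real pipeline the resulting indices would populate a sparse feature vector fed to the linear model; the bucket count trades memory against collision rate.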
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org