Posted to issues@spark.apache.org by "Max Moroz (JIRA)" <ji...@apache.org> on 2016/08/01 18:12:21 UTC

[jira] [Comment Edited] (SPARK-16834) TrainValidationSplit and direct evaluation produce different scores

    [ https://issues.apache.org/jira/browse/SPARK-16834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15402546#comment-15402546 ] 

Max Moroz edited comment on SPARK-16834 at 8/1/16 6:11 PM:
-----------------------------------------------------------

[~sowen] The two code excerpts are different, but only in which random functions are used to create the train/validation splits; otherwise they do the same thing. Of course, the results should differ slightly; that by itself is not a problem.

The problem is that the difference in the results is highly statistically significant; it is also practically significant (if you print out the actual numbers instead of the True/False values like I did, you'll see the differences are meaningful). That shouldn't happen if the randomization is done properly.
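
For concreteness, here is a minimal sketch of what I mean (same toy dataset and loop as in the description below; assumes an active SparkSession named spark), printing the raw RMSE values side by side instead of a boolean:

{code}
from pyspark.ml.linalg import Vectors
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

# Same toy dataset as in the issue description below.
dataset = spark.createDataFrame(
    [(Vectors.dense([0.0]), 0.0),
     (Vectors.dense([0.4]), 1.0),
     (Vectors.dense([0.5]), 0.0),
     (Vectors.dense([0.6]), 1.0),
     (Vectors.dense([1.0]), 1.0)] * 1000,
    ["features", "label"]).cache()

for i in range(20):
    # Path 1: TrainValidationSplit on 80% of the data, split 50/50
    # internally (i.e. 40/40 of the full dataset).
    train, test = dataset.randomSplit([0.8, 0.2])
    tvs = TrainValidationSplit(estimator=LinearRegression(),
                               estimatorParamMaps=ParamGridBuilder().build(),
                               evaluator=RegressionEvaluator(),
                               trainRatio=0.5)
    tvs_rmse = tvs.fit(train).validationMetrics[0]

    # Path 2: an equivalent manual 40/40/20 split, evaluated directly.
    train, val, test = dataset.randomSplit([0.4, 0.4, 0.2])
    predicted = LinearRegression().fit(train).transform(val)
    manual_rmse = RegressionEvaluator().evaluate(predicted)

    # Print the actual metric values rather than a True/False comparison.
    print(tvs_rmse, manual_rmse, tvs_rmse - manual_rmse)
{code}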

I can dig deeper, but I wanted to get some feedback before spending more time on it.
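
For what it's worth, both split mechanisms take an explicit seed, so while digging, any single comparison can be made reproducible. A sketch, continuing from the snippet above (the seed value is arbitrary):

{code}
# Fixing the seeds makes a single train/validation comparison reproducible.
train, test = dataset.randomSplit([0.8, 0.2], seed=42)
tvs = TrainValidationSplit(estimator=LinearRegression(),
                           estimatorParamMaps=ParamGridBuilder().build(),
                           evaluator=RegressionEvaluator(),
                           trainRatio=0.5,
                           seed=42)
{code}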



> TrainValidationSplit and direct evaluation produce different scores
> -------------------------------------------------------------------
>
>                 Key: SPARK-16834
>                 URL: https://issues.apache.org/jira/browse/SPARK-16834
>             Project: Spark
>          Issue Type: Bug
>          Components: ML, PySpark
>    Affects Versions: 2.0.0
>            Reporter: Max Moroz
>
> The two code paths below are supposed to do the same thing: one uses TrainValidationSplit, the other performs the same split and evaluation manually. However, their results differ statistically (in my case, in a loop of 20 iterations, I regularly get ~19 True values).
> Unfortunately, I couldn't locate the bug in the source code.
> {code}
> from pyspark.ml.linalg import Vectors
> import pyspark.ml.evaluation
> import pyspark.ml.regression
> import pyspark.ml.tuning
>
> dataset = spark.createDataFrame(
>     [(Vectors.dense([0.0]), 0.0),
>      (Vectors.dense([0.4]), 1.0),
>      (Vectors.dense([0.5]), 0.0),
>      (Vectors.dense([0.6]), 1.0),
>      (Vectors.dense([1.0]), 1.0)] * 1000,
>     ["features", "label"]).cache()
> paramGrid = pyspark.ml.tuning.ParamGridBuilder().build()
>
> for i in range(20):
>     # TrainValidationSplit path: take 80% of the data and let the tuner
>     # split it 50/50 internally (i.e. 40/40 of the full dataset).
>     # Note that test is NEVER used in this code; it exists only so that
>     # randomSplit is exercised the same way on both paths.
>     train, test = dataset.randomSplit([0.8, 0.2])
>     tvs = pyspark.ml.tuning.TrainValidationSplit(
>         estimator=pyspark.ml.regression.LinearRegression(),
>         estimatorParamMaps=paramGrid,
>         evaluator=pyspark.ml.evaluation.RegressionEvaluator(),
>         trainRatio=0.5)
>     model = tvs.fit(train)
>
>     # Manual path: an equivalent 40/40/20 split, evaluated directly.
>     train, val, test = dataset.randomSplit([0.4, 0.4, 0.2])
>     lr = pyspark.ml.regression.LinearRegression()
>     evaluator = pyspark.ml.evaluation.RegressionEvaluator()
>     lrModel = lr.fit(train)
>     predicted = lrModel.transform(val)
>     print(model.validationMetrics[0] < evaluator.evaluate(predicted))
> {code}


