Posted to issues@spark.apache.org by "Bryan Cutler (JIRA)" <ji...@apache.org> on 2016/09/09 22:09:20 UTC
[jira] [Commented] (SPARK-16834) TrainValidationSplit and direct evaluation produce different scores
[ https://issues.apache.org/jira/browse/SPARK-16834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15478414#comment-15478414 ]
Bryan Cutler commented on SPARK-16834:
--------------------------------------
[~mmoroz], your sample doesn't quite do the same thing as TrainValidationSplit. The main difference is that TrainValidationSplit selects its validation set using a fixed seed, while your manual version draws a fresh random split on every iteration. Here is a slightly reworked sample that reproduces the internal split and matches pretty well.
{noformat}
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession
from pyspark.sql.functions import rand
import numpy as np

spark = SparkSession \
    .builder \
    .appName("TrainValidationSplit") \
    .getOrCreate()

dataset = spark.createDataFrame(
    [(Vectors.dense([0.0]), 0.0),
     (Vectors.dense([0.4]), 1.0),
     (Vectors.dense([0.5]), 0.0),
     (Vectors.dense([0.6]), 1.0),
     (Vectors.dense([1.0]), 1.0)] * 1000,
    ["features", "label"]).cache()

paramGrid = ParamGridBuilder().build()

# note that test is NEVER used in this code
# I create it only to utilize randomSplit
for i in range(50):
    train, test = dataset.randomSplit([0.8, 0.2])

    tvs = TrainValidationSplit(estimator=LinearRegression(),
                               estimatorParamMaps=paramGrid,
                               evaluator=RegressionEvaluator(),
                               trainRatio=0.5)
    model = tvs.fit(train)

    # reproduce the internal split, taken from TrainValidationSplit.fit:
    # rows where rand(seed) >= trainRatio go to validation
    seed = tvs.getSeed()
    randCol = "manual_tvs_rand"
    df = train.select("*", rand(seed).alias(randCol))
    condition = (df[randCol] >= 0.5)  # 0.5 == trainRatio
    validation = df.filter(condition)
    train_tvs = df.filter(~condition)

    lr = LinearRegression()
    evaluator = RegressionEvaluator()
    lrModel = lr.fit(train_tvs)
    predicted = lrModel.transform(validation)

    # the two scores should now agree to floating-point precision
    a = model.validationMetrics[0]
    b = evaluator.evaluate(predicted)
    print(np.isclose(a, b, atol=1e-15, rtol=0.0), a, b)

spark.stop()
{noformat}
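One more note: the agreement above holds within a single run because getSeed() reads the seed off the fitted TrainValidationSplit instance. If you also want the split to be reproducible across runs, you can pass an explicit seed when constructing it. A minimal sketch (seed=42 is just an arbitrary fixed value, not anything from your code):
{noformat}
# An explicit seed pins the internal validation split, so rand(seed)
# regenerates exactly the same held-out rows on every run.
tvs = TrainValidationSplit(estimator=LinearRegression(),
                           estimatorParamMaps=paramGrid,
                           evaluator=RegressionEvaluator(),
                           trainRatio=0.5,
                           seed=42)  # arbitrary, but fixed
{noformat}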
> TrainValidationSplit and direct evaluation produce different scores
> -------------------------------------------------------------------
>
> Key: SPARK-16834
> URL: https://issues.apache.org/jira/browse/SPARK-16834
> Project: Spark
> Issue Type: Bug
> Components: ML, PySpark
> Affects Versions: 2.0.0
> Reporter: Max Moroz
>
> The two segments of code below are supposed to do the same thing: one uses TrainValidationSplit, the other performs the same evaluation manually. However, their results differ systematically (in my case, in a loop of 20 iterations, I regularly get ~19 True values, i.e. the TrainValidationSplit metric is almost always lower than the manually computed one).
> Unfortunately, I couldn't find the bug in the source code.
> {code}
> from pyspark.ml.linalg import Vectors
> import pyspark.ml.evaluation, pyspark.ml.regression, pyspark.ml.tuning
>
> dataset = spark.createDataFrame(
>     [(Vectors.dense([0.0]), 0.0),
>      (Vectors.dense([0.4]), 1.0),
>      (Vectors.dense([0.5]), 0.0),
>      (Vectors.dense([0.6]), 1.0),
>      (Vectors.dense([1.0]), 1.0)] * 1000,
>     ["features", "label"]).cache()
>
> paramGrid = pyspark.ml.tuning.ParamGridBuilder().build()
>
> # note that test is NEVER used in this code
> # I create it only to utilize randomSplit
> for i in range(20):
>     train, test = dataset.randomSplit([0.8, 0.2])
>     tvs = pyspark.ml.tuning.TrainValidationSplit(
>         estimator=pyspark.ml.regression.LinearRegression(),
>         estimatorParamMaps=paramGrid,
>         evaluator=pyspark.ml.evaluation.RegressionEvaluator(),
>         trainRatio=0.5)
>     model = tvs.fit(train)
>
>     # manual version: an independent random split of the full dataset
>     train, val, test = dataset.randomSplit([0.4, 0.4, 0.2])
>     lr = pyspark.ml.regression.LinearRegression()
>     evaluator = pyspark.ml.evaluation.RegressionEvaluator()
>     lrModel = lr.fit(train)
>     predicted = lrModel.transform(val)
>
>     print(model.validationMetrics[0] < evaluator.evaluate(predicted))
> {code}