Posted to commits@spark.apache.org by gu...@apache.org on 2021/02/09 14:01:39 UTC

[spark] branch branch-3.1 updated: [MINOR][ML][TESTS] Increase tolerance to make NaiveBayesSuite more robust

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
     new 800be71  [MINOR][ML][TESTS] Increase tolerance to make NaiveBayesSuite more robust
800be71 is described below

commit 800be7182f764f13f32f77a453c08917751916ec
Author: Weichen Xu <we...@databricks.com>
AuthorDate: Tue Feb 9 23:00:13 2021 +0900

    [MINOR][ML][TESTS] Increase tolerance to make NaiveBayesSuite more robust
    
    ### What changes were proposed in this pull request?
    Increase the relative tolerance (`relTol`) used when comparing the fitted model's parameters against the true values from 0.2 to 0.35.
    
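    For intuition, a relative-tolerance check accepts two values when their difference is small compared to their magnitudes. A minimal sketch of such a predicate, assuming semantics similar to (but not necessarily identical to) the `~==` / `relTol` operator in org.apache.spark.mllib.util.TestingUtils:

    ```scala
    // Sketch only: approxEqual is a hypothetical stand-in, not Spark's actual operator.
    def approxEqual(expected: Double, actual: Double, relTol: Double): Boolean =
      math.abs(expected - actual) < relTol * math.min(math.abs(expected), math.abs(actual))
    ```
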
    ### Why are the changes needed?
    Fixes a flaky test: the suite's randomly generated data occasionally yields parameter estimates outside the previous 0.2 relative tolerance.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    Existing unit tests (NaiveBayesSuite).
    
    Closes #31536 from WeichenXu123/ES-65815.
    
    Authored-by: Weichen Xu <we...@databricks.com>
    Signed-off-by: HyukjinKwon <gu...@apache.org>
    (cherry picked from commit 18b30107adb37d3c7a767a20cc02813f0fdb86da)
    Signed-off-by: HyukjinKwon <gu...@apache.org>
---
 .../scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mllib/src/test/scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala b/mllib/src/test/scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala
index 4de1084..b9d83dd 100644
--- a/mllib/src/test/scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala
+++ b/mllib/src/test/scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala
@@ -107,9 +107,9 @@ class NaiveBayesSuite extends SparkFunSuite with MLlibTestSparkContext {
     val modelIndex = piData.indices.zip(model.labels.map(_.toInt))
     try {
       for (i <- modelIndex) {
-        assert(piData(i._2) ~== model.pi(i._1) relTol 0.2)
+        assert(piData(i._2) ~== model.pi(i._1) relTol 0.35)
         for (j <- thetaData(i._2).indices) {
-          assert(thetaData(i._2)(j) ~== model.theta(i._1)(j) relTol 0.2)
+          assert(thetaData(i._2)(j) ~== model.theta(i._1)(j) relTol 0.35)
         }
       }
     } catch {

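For intuition about the change above: it now accepts estimates whose relative error falls between 20% and 35%, the band that previously caused intermittent failures. A small, self-contained sketch with illustrative numbers (not values taken from the suite); `approxEqual` is a hypothetical stand-in for the `~==` / `relTol` operator in org.apache.spark.mllib.util.TestingUtils:

```scala
object RelTolDemo {
  // Stand-in for the relative-tolerance comparison (assumed semantics,
  // repeated here so the sketch compiles on its own).
  def approxEqual(expected: Double, actual: Double, relTol: Double): Boolean =
    math.abs(expected - actual) < relTol * math.min(math.abs(expected), math.abs(actual))

  def main(args: Array[String]): Unit = {
    val truePi = 0.60     // hypothetical true class prior, playing the role of piData(i._2)
    val estimated = 0.45  // hypothetical estimate, playing the role of model.pi(i._1)
    // |0.60 - 0.45| = 0.15, i.e. roughly a 33% relative error.
    println(approxEqual(truePi, estimated, 0.20)) // false -> the old intermittent failure
    println(approxEqual(truePi, estimated, 0.35)) // true  -> passes after this commit
  }
}
```
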
