Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/03/21 15:08:39 UTC

[GitHub] [spark] shahidki31 commented on a change in pull request #20793: [SPARK-23643][CORE][SQL][ML] Shrinking the buffer in hashSeed up to size of the seed parameter

shahidki31 commented on a change in pull request #20793: [SPARK-23643][CORE][SQL][ML] Shrinking the buffer in hashSeed up to size of the seed parameter
URL: https://github.com/apache/spark/pull/20793#discussion_r267804279
 
 

 ##########
 File path: mllib/src/test/scala/org/apache/spark/mllib/clustering/PowerIterationClusteringSuite.scala
 ##########
 @@ -48,7 +48,7 @@ class PowerIterationClusteringSuite extends SparkFunSuite with MLlibTestSparkCon
     // Generate two circles following the example in the PIC paper.
     val r1 = 1.0
     val n1 = 10
-    val r2 = 4.0
+    val r2 = 14.0
 
 Review comment:
   While analyzing this issue, I also encountered the stack overflow error; I will create a JIRA for it.
   
   Ideally, PIC takes a random vector as its initial input and converges to an optimal assignment as the number of iterations increases, so the result should not depend on the "seed", provided it converges.
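   
   For context, here is a minimal sketch of the kind of check being discussed: generate the two concentric circles from the PIC paper example referenced in the diff, build a Gaussian-similarity graph, and run spark.mllib's PowerIterationClustering with random initialization. The circle and similarity helpers below are illustrative assumptions, not the suite's actual code, and `sc` is assumed to be an existing SparkContext (e.g. from spark-shell); with enough iterations the assignments should separate the two circles regardless of the random starting vector.
   
       import org.apache.spark.mllib.clustering.PowerIterationClustering
       import org.apache.spark.rdd.RDD
   
       // Illustrative helper (an assumption, not the suite's actual code):
       // n points evenly spaced on a circle of radius r.
       def circle(r: Double, n: Int): Seq[(Double, Double)] =
         Seq.tabulate(n) { i =>
           val theta = 2.0 * math.Pi * i / n
           (r * math.cos(theta), r * math.sin(theta))
         }
   
       // Gaussian similarity between two points (also an illustrative choice).
       def gaussianSimilarity(p1: (Double, Double), p2: (Double, Double)): Double = {
         val d2 = math.pow(p1._1 - p2._1, 2) + math.pow(p1._2 - p2._2, 2)
         math.exp(-d2 / 2.0)
       }
   
       // Small inner circle plus larger outer circle, as in the test's setup.
       val points = circle(r = 1.0, n = 10) ++ circle(r = 14.0, n = 40)
   
       // Fully connected similarity graph as (srcId, dstId, similarity) triples.
       // `sc` is assumed to be an existing SparkContext.
       val similarities: RDD[(Long, Long, Double)] = sc.parallelize(
         for {
           i <- points.indices
           j <- 0 until i
         } yield (i.toLong, j.toLong, gaussianSimilarity(points(i), points(j))))
   
       // Random initialization; given enough iterations the result should not
       // depend on which random vector the power iteration starts from.
       val model = new PowerIterationClustering()
         .setK(2)
         .setMaxIterations(40)
         .setInitializationMode("random")
         .run(similarities)
   
       model.assignments.collect().foreach(a => println(s"${a.id} -> ${a.cluster}"))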

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org