Posted to issues@spark.apache.org by "shahid (JIRA)" <ji...@apache.org> on 2019/03/21 15:11:00 UTC
[jira] [Updated] (SPARK-27231) Stack overflow error when we increase the number of iterations in PIC
[ https://issues.apache.org/jira/browse/SPARK-27231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
shahid updated SPARK-27231:
---------------------------
Description:
import org.apache.spark.ml.clustering.PowerIterationClustering

val dataset = spark.createDataFrame(Seq(
  (0L, 1L, 1.0),
  (0L, 2L, 1.0),
  (1L, 2L, 1.0),
  (3L, 4L, 1.0),
  (4L, 0L, 0.1)
)).toDF("src", "dst", "weight")

val model = new PowerIterationClustering().
  setK(2).
  setMaxIter(100).
  setInitMode("degree").
  setWeightCol("weight")

val prediction = model.assignClusters(dataset).select("id", "cluster")

java.lang.StackOverflowError
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2188)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
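The trace shows ObjectInputStream recursing through readObject0/readSerialData, the classic symptom of (de)serializing a deeply nested object graph: default Java serialization walks the graph recursively, so each nested reference costs several stack frames, and a long enough chain (such as the lineage PIC builds up over many power iterations) overflows the stack regardless of heap size. A minimal, Spark-free sketch of that failure mode, with the Node class and depths purely illustrative and not taken from the Spark code:

```scala
import java.io._

// Each node references the next, so default Java serialization walks the
// chain recursively: a few stack frames per link, on both the write side
// and the read side.
case class Node(value: Int, next: Node)

object DeepSerializationDemo {
  // Serialize and deserialize a chain of `depth` nodes.
  def roundTrip(depth: Int): Unit = {
    var head: Node = null
    for (i <- 0 until depth) head = Node(i, head) // iterative build, no recursion here

    val buf = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buf)
    out.writeObject(head) // recursion depth grows with the chain length
    out.close()

    val in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
    in.readObject() // same recursive walk on the way back in
    in.close()
  }

  // True if the round trip completes, false if the JVM stack overflows.
  def survives(depth: Int): Boolean =
    try { roundTrip(depth); true }
    catch { case _: StackOverflowError => false }

  def main(args: Array[String]): Unit = {
    println(s"depth 100 survives: ${survives(100)}")
    println(s"depth 1000000 survives: ${survives(1000000)}")
  }
}
```

This is why truncating the lineage (e.g. periodic RDD checkpointing) is the usual mitigation in Spark: it keeps the serialized object graph shallow no matter how many iterations run.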
was:
import org.apache.spark.ml.clustering.PowerIterationClustering

val dataset = spark.createDataFrame(Seq(
  (0L, 1L, 1.0),
  (0L, 2L, 1.0),
  (1L, 2L, 1.0),
  (3L, 4L, 1.0),
  (4L, 0L, 0.1)
)).toDF("src", "dst", "weight")

val model = new PowerIterationClustering().
  setK(2).
  setMaxIter(100).
  setInitMode("degree").
  setWeightCol("weight")

val prediction = model.assignClusters(dataset).select("id", "cluster")
> Stack overflow error when we increase the number of iterations in PIC
> ---------------------------------------------------------------------
>
> Key: SPARK-27231
> URL: https://issues.apache.org/jira/browse/SPARK-27231
> Project: Spark
> Issue Type: Bug
> Components: MLlib
> Affects Versions: 2.3.3, 2.4.0
> Reporter: shahid
> Priority: Minor
>
> import org.apache.spark.ml.clustering.PowerIterationClustering
>
> val dataset = spark.createDataFrame(Seq(
>   (0L, 1L, 1.0),
>   (0L, 2L, 1.0),
>   (1L, 2L, 1.0),
>   (3L, 4L, 1.0),
>   (4L, 0L, 0.1)
> )).toDF("src", "dst", "weight")
>
> val model = new PowerIterationClustering().
>   setK(2).
>   setMaxIter(100).
>   setInitMode("degree").
>   setWeightCol("weight")
>
> val prediction = model.assignClusters(dataset).select("id", "cluster")
>
> java.lang.StackOverflowError
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2188)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org