Posted to commits@spark.apache.org by sr...@apache.org on 2019/02/01 01:34:08 UTC

[spark] branch master updated: [SPARK-25997][ML] add Python example code for Power Iteration Clustering in spark.ml

This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new f7d87b1  [SPARK-25997][ML] add Python example code for Power Iteration Clustering in spark.ml
f7d87b1 is described below

commit f7d87b1685eeac3a2c8b903ddb86e1921fcc193c
Author: Huaxin Gao <hu...@us.ibm.com>
AuthorDate: Thu Jan 31 19:33:44 2019 -0600

    [SPARK-25997][ML] add Python example code for Power Iteration Clustering in spark.ml
    
    ## What changes were proposed in this pull request?
    
    Add a Python example for Power Iteration Clustering in spark.ml
    
    ## How was this patch tested?
    
    Manually tested
    
    Closes #22996 from huaxingao/spark-25997.
    
    Authored-by: Huaxin Gao <hu...@us.ibm.com>
    Signed-off-by: Sean Owen <se...@databricks.com>
---
 docs/ml-clustering.md                              |  6 +++
 .../ml/power_iteration_clustering_example.py       | 49 ++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/docs/ml-clustering.md b/docs/ml-clustering.md
index 65f2652..21e38ca 100644
--- a/docs/ml-clustering.md
+++ b/docs/ml-clustering.md
@@ -298,6 +298,12 @@ Refer to the [Java API docs](api/java/org/apache/spark/ml/clustering/PowerIterat
 {% include_example java/org/apache/spark/examples/ml/JavaPowerIterationClusteringExample.java %}
 </div>
 
+<div data-lang="python" markdown="1">
+Refer to the [Python API docs](api/python/pyspark.ml.html#pyspark.ml.clustering.PowerIterationClustering) for more details.
+
+{% include_example python/ml/power_iteration_clustering_example.py %}
+</div>
+
 <div data-lang="r" markdown="1">
 
 Refer to the [R API docs](api/R/spark.powerIterationClustering.html) for more details.
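
For context, the {% include_example %} tag in the Spark docs build renders only the code between the "$example on$" and "$example off$" markers of the referenced example file, so the clustering page shows the PowerIterationClustering usage without the license header or SparkSession boilerplate. A minimal sketch of the marker convention in a Python example (the surrounding lines here are illustrative, not part of this commit):

    # $example on$
    from pyspark.ml.clustering import PowerIterationClustering
    # ...only the code between the markers appears on the rendered docs page...
    # $example off$
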
diff --git a/examples/src/main/python/ml/power_iteration_clustering_example.py b/examples/src/main/python/ml/power_iteration_clustering_example.py
new file mode 100644
index 0000000..c983c4a
--- /dev/null
+++ b/examples/src/main/python/ml/power_iteration_clustering_example.py
@@ -0,0 +1,49 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+An example demonstrating PowerIterationClustering.
+Run with:
+  bin/spark-submit examples/src/main/python/ml/power_iteration_clustering_example.py
+"""
+# $example on$
+from pyspark.ml.clustering import PowerIterationClustering
+# $example off$
+from pyspark.sql import SparkSession
+
+if __name__ == "__main__":
+    spark = SparkSession\
+        .builder\
+        .appName("PowerIterationClusteringExample")\
+        .getOrCreate()
+
+    # $example on$
+    df = spark.createDataFrame([
+        (0, 1, 1.0),
+        (0, 2, 1.0),
+        (1, 2, 1.0),
+        (3, 4, 1.0),
+        (4, 0, 0.1)
+    ], ["src", "dst", "weight"])
+
+    pic = PowerIterationClustering(k=2, maxIter=20, initMode="degree", weightCol="weight")
+
+    # Shows the cluster assignment
+    pic.assignClusters(df).show()
+    # $example off$
+
+    spark.stop()

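As a quick check of the new example, a minimal sketch of the same PowerIterationClustering call run on its own (not part of the commit; the app name is illustrative). assignClusters returns a DataFrame with "id" and "cluster" columns, and the exact cluster labels can differ between runs:

    from pyspark.sql import SparkSession
    from pyspark.ml.clustering import PowerIterationClustering

    spark = SparkSession.builder.appName("PICAssignmentsCheck").getOrCreate()

    # Same (src, dst, weight) affinity triples as in the committed example.
    df = spark.createDataFrame([
        (0, 1, 1.0),
        (0, 2, 1.0),
        (1, 2, 1.0),
        (3, 4, 1.0),
        (4, 0, 0.1)
    ], ["src", "dst", "weight"])

    pic = PowerIterationClustering(k=2, maxIter=20, initMode="degree", weightCol="weight")

    # assignClusters returns a DataFrame with "id" and "cluster" columns;
    # sorting by id makes the assignment easy to read.
    assignments = pic.assignClusters(df)
    assignments.orderBy("id").show()

    spark.stop()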
