Posted to commits@spark.apache.org by yl...@apache.org on 2017/01/10 10:18:17 UTC

spark git commit: [SPARK-19134][EXAMPLE] Fix several sql, mllib and status api examples not working

Repository: spark
Updated Branches:
  refs/heads/master 3ef6d98a8 -> b0e5840d4


[SPARK-19134][EXAMPLE] Fix several sql, mllib and status api examples not working

## What changes were proposed in this pull request?

**binary_classification_metrics_example.py**

The LibSVM datasource loads `ml.linalg.SparseVector` whereas the example requires `mllib.linalg.SparseVector`. The equivalent Scala example, `BinaryClassificationMetricsExample.scala`, seems fine.

```bash
./bin/spark-submit examples/src/main/python/mllib/binary_classification_metrics_example.py
```

```
  File ".../spark/examples/src/main/python/mllib/binary_classification_metrics_example.py", line 39, in <lambda>
    .rdd.map(lambda row: LabeledPoint(row[0], row[1]))
  File ".../spark/python/pyspark/mllib/regression.py", line 54, in __init__
    self.features = _convert_to_vector(features)
  File ".../spark/python/pyspark/mllib/linalg/__init__.py", line 80, in _convert_to_vector
    raise TypeError("Cannot convert type %s into Vector" % type(l))
TypeError: Cannot convert type <class 'pyspark.ml.linalg.SparseVector'> into Vector
```

**status_api_demo.py** (this one does not work on Python 3.4.6)

The module is named `queue` in Python 3+, so the bare `import Queue` fails.

```bash
PYSPARK_PYTHON=python3 ./bin/spark-submit examples/src/main/python/status_api_demo.py
```

```
Traceback (most recent call last):
  File ".../spark/examples/src/main/python/status_api_demo.py", line 22, in <module>
    import Queue
ImportError: No module named 'Queue'
```
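The fix is a version-guarded import so the same script runs on both Python 2 and 3. A minimal standalone sketch of the pattern (plain stdlib, no Spark required; the queued value is just an illustration):

```python
import sys

# The stdlib module was renamed from Queue (Python 2) to queue (Python 3);
# aliasing it lets the rest of the script keep using the Python 2 name.
if sys.version_info[0] >= 3:
    import queue as Queue
else:
    import Queue

q = Queue.Queue()
q.put("task-1")
print(q.get())  # → task-1
```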

**bisecting_k_means_example.py**

`BisectingKMeansModel` does not implement `save` and `load` in Python.

```bash
./bin/spark-submit examples/src/main/python/mllib/bisecting_k_means_example.py
```

```
Traceback (most recent call last):
  File ".../spark/examples/src/main/python/mllib/bisecting_k_means_example.py", line 46, in <module>
    model.save(sc, path)
AttributeError: 'BisectingKMeansModel' object has no attribute 'save'
```

**elementwise_product_example.py**

It calls `collect` on the transformed vector, which is a local `numpy.ndarray` rather than an RDD.

```bash
./bin/spark-submit examples/src/main/python/mllib/elementwise_product_example.py
```

```
Traceback (most recent call last):
  File ".../spark/examples/src/main/python/mllib/elementwise_product_example.py", line 48, in <module>
    for each in transformedData2.collect():
  File ".../spark/python/pyspark/mllib/linalg/__init__.py", line 478, in __getattr__
    return getattr(self.array, item)
AttributeError: 'numpy.ndarray' object has no attribute 'collect'
```
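Transforming a single local vector already yields a local array, so iterating it directly is the fix. As a plain-Python illustration of why there is nothing to `collect()` (hypothetical values, no Spark involved):

```python
# Stand-in for ElementwiseProduct.transform applied to one local vector:
# the scaling vector is applied componentwise and the result is a plain,
# already-materialized local sequence -- there is no RDD to collect().
scaling = [0.0, 1.0, 2.0]
vector = [1.0, 2.0, 3.0]
transformed = [s * v for s, v in zip(scaling, vector)]
for each in transformed:  # iterate directly; no .collect()
    print(each)
```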

**These three examples throw an exception because a relative path is set in `spark.sql.warehouse.dir`.**

**hive.py**

```bash
./bin/spark-submit examples/src/main/python/sql/hive.py
```

```
Traceback (most recent call last):
  File ".../spark/examples/src/main/python/sql/hive.py", line 47, in <module>
    spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
  File ".../spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 541, in sql
  File ".../spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File ".../spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 69, in deco
pyspark.sql.utils.AnalysisException: 'org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:./spark-warehouse);'
```
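The cause is that the relative default `spark-warehouse` ends up embedded in a `file:` URI, which Hive rejects; resolving the path to an absolute one before building the session avoids the `URISyntaxException`. A minimal sketch of the idiom (the directory name is just the example's default):

```python
from os.path import abspath

# A relative warehouse dir becomes "file:./spark-warehouse", a relative path
# inside an absolute URI, which Hive rejects. Resolve it first and pass the
# absolute path to spark.sql.warehouse.dir instead.
warehouse_location = abspath('spark-warehouse')
print(warehouse_location)
```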

**SparkHiveExample.scala**

```bash
./bin/run-example sql.hive.SparkHiveExample
```

```
Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:./spark-warehouse
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:498)
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1668)
```

**JavaSparkHiveExample.java**

```bash
./bin/run-example sql.hive.JavaSparkHiveExample
```

```
Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:./spark-warehouse
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:498)
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1668)
```

## How was this patch tested?

Manually via

```bash
./bin/spark-submit examples/src/main/python/mllib/binary_classification_metrics_example.py
PYSPARK_PYTHON=python3 ./bin/spark-submit examples/src/main/python/status_api_demo.py
./bin/spark-submit examples/src/main/python/mllib/bisecting_k_means_example.py
./bin/spark-submit examples/src/main/python/mllib/elementwise_product_example.py
./bin/spark-submit examples/src/main/python/sql/hive.py
./bin/run-example sql.hive.JavaSparkHiveExample
./bin/run-example sql.hive.SparkHiveExample
```

These were found via

```bash
find ./examples/src/main/python -name "*.py" -exec spark-submit {} \;
```

Author: hyukjinkwon <gu...@gmail.com>

Closes #16515 from HyukjinKwon/minor-example-fix.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b0e5840d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b0e5840d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b0e5840d

Branch: refs/heads/master
Commit: b0e5840d4b37d7b73e300671795185bba37effb0
Parents: 3ef6d98
Author: hyukjinkwon <gu...@gmail.com>
Authored: Tue Jan 10 02:18:07 2017 -0800
Committer: Yanbo Liang <yb...@gmail.com>
Committed: Tue Jan 10 02:18:07 2017 -0800

----------------------------------------------------------------------
 .../examples/sql/hive/JavaSparkHiveExample.java      |  3 ++-
 .../mllib/binary_classification_metrics_example.py   | 15 +++++----------
 .../main/python/mllib/bisecting_k_means_example.py   |  5 -----
 .../main/python/mllib/elementwise_product_example.py |  2 +-
 examples/src/main/python/sql/hive.py                 |  4 ++--
 examples/src/main/python/status_api_demo.py          |  6 +++++-
 .../spark/examples/sql/hive/SparkHiveExample.scala   |  4 +++-
 7 files changed, 18 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java b/examples/src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java
index 8d06d38..2fe1307 100644
--- a/examples/src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java
@@ -17,6 +17,7 @@
 package org.apache.spark.examples.sql.hive;
 
 // $example on:spark_hive$
+import java.io.File;
 import java.io.Serializable;
 import java.util.ArrayList;
 import java.util.List;
@@ -56,7 +57,7 @@ public class JavaSparkHiveExample {
   public static void main(String[] args) {
     // $example on:spark_hive$
     // warehouseLocation points to the default location for managed databases and tables
-    String warehouseLocation = "spark-warehouse";
+    String warehouseLocation = new File("spark-warehouse").getAbsolutePath();
     SparkSession spark = SparkSession
       .builder()
       .appName("Java Spark Hive Example")

http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/python/mllib/binary_classification_metrics_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/mllib/binary_classification_metrics_example.py b/examples/src/main/python/mllib/binary_classification_metrics_example.py
index 91f8378..d14ce79 100644
--- a/examples/src/main/python/mllib/binary_classification_metrics_example.py
+++ b/examples/src/main/python/mllib/binary_classification_metrics_example.py
@@ -18,25 +18,20 @@
 Binary Classification Metrics Example.
 """
 from __future__ import print_function
-from pyspark.sql import SparkSession
+from pyspark import SparkContext
 # $example on$
 from pyspark.mllib.classification import LogisticRegressionWithLBFGS
 from pyspark.mllib.evaluation import BinaryClassificationMetrics
-from pyspark.mllib.regression import LabeledPoint
+from pyspark.mllib.util import MLUtils
 # $example off$
 
 if __name__ == "__main__":
-    spark = SparkSession\
-        .builder\
-        .appName("BinaryClassificationMetricsExample")\
-        .getOrCreate()
+    sc = SparkContext(appName="BinaryClassificationMetricsExample")
 
     # $example on$
     # Several of the methods available in scala are currently missing from pyspark
     # Load training data in LIBSVM format
-    data = spark\
-        .read.format("libsvm").load("data/mllib/sample_binary_classification_data.txt")\
-        .rdd.map(lambda row: LabeledPoint(row[0], row[1]))
+    data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_binary_classification_data.txt")
 
     # Split data into training (60%) and test (40%)
     training, test = data.randomSplit([0.6, 0.4], seed=11)
@@ -58,4 +53,4 @@ if __name__ == "__main__":
     print("Area under ROC = %s" % metrics.areaUnderROC)
     # $example off$
 
-    spark.stop()
+    sc.stop()

http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/python/mllib/bisecting_k_means_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/mllib/bisecting_k_means_example.py b/examples/src/main/python/mllib/bisecting_k_means_example.py
index 7f4d040..31f3e72 100644
--- a/examples/src/main/python/mllib/bisecting_k_means_example.py
+++ b/examples/src/main/python/mllib/bisecting_k_means_example.py
@@ -40,11 +40,6 @@ if __name__ == "__main__":
     # Evaluate clustering
     cost = model.computeCost(parsedData)
     print("Bisecting K-means Cost = " + str(cost))
-
-    # Save and load model
-    path = "target/org/apache/spark/PythonBisectingKMeansExample/BisectingKMeansModel"
-    model.save(sc, path)
-    sameModel = BisectingKMeansModel.load(sc, path)
     # $example off$
 
     sc.stop()

http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/python/mllib/elementwise_product_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/mllib/elementwise_product_example.py b/examples/src/main/python/mllib/elementwise_product_example.py
index 6d8bf6d..8ae9afb 100644
--- a/examples/src/main/python/mllib/elementwise_product_example.py
+++ b/examples/src/main/python/mllib/elementwise_product_example.py
@@ -45,7 +45,7 @@ if __name__ == "__main__":
         print(each)
 
     print("transformedData2:")
-    for each in transformedData2.collect():
+    for each in transformedData2:
         print(each)
 
     sc.stop()

http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/python/sql/hive.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/sql/hive.py b/examples/src/main/python/sql/hive.py
index ba01544..1f175d7 100644
--- a/examples/src/main/python/sql/hive.py
+++ b/examples/src/main/python/sql/hive.py
@@ -18,7 +18,7 @@
 from __future__ import print_function
 
 # $example on:spark_hive$
-from os.path import expanduser, join
+from os.path import expanduser, join, abspath
 
 from pyspark.sql import SparkSession
 from pyspark.sql import Row
@@ -34,7 +34,7 @@ Run with:
 if __name__ == "__main__":
     # $example on:spark_hive$
     # warehouse_location points to the default location for managed databases and tables
-    warehouse_location = 'spark-warehouse'
+    warehouse_location = abspath('spark-warehouse')
 
     spark = SparkSession \
         .builder \

http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/python/status_api_demo.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/status_api_demo.py b/examples/src/main/python/status_api_demo.py
index 49b7902..8cc8cc8 100644
--- a/examples/src/main/python/status_api_demo.py
+++ b/examples/src/main/python/status_api_demo.py
@@ -19,7 +19,11 @@ from __future__ import print_function
 
 import time
 import threading
-import Queue
+import sys
+if sys.version >= '3':
+    import queue as Queue
+else:
+    import Queue
 
 from pyspark import SparkConf, SparkContext
 

http://git-wip-us.apache.org/repos/asf/spark/blob/b0e5840d/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
----------------------------------------------------------------------
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala b/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
index d29ed95..3de2636 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala
@@ -17,6 +17,8 @@
 package org.apache.spark.examples.sql.hive
 
 // $example on:spark_hive$
+import java.io.File
+
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.SparkSession
 // $example off:spark_hive$
@@ -38,7 +40,7 @@ object SparkHiveExample {
 
     // $example on:spark_hive$
     // warehouseLocation points to the default location for managed databases and tables
-    val warehouseLocation = "spark-warehouse"
+    val warehouseLocation = new File("spark-warehouse").getAbsolutePath
 
     val spark = SparkSession
       .builder()

