Posted to issues@spark.apache.org by "Sam Stoelinga (JIRA)" <ji...@apache.org> on 2016/01/21 05:46:40 UTC
[jira] [Created] (SPARK-12947) Spark with Swift throws EOFException when reading parquet file
Sam Stoelinga created SPARK-12947:
-------------------------------------
Summary: Spark with Swift throws EOFException when reading parquet file
Key: SPARK-12947
URL: https://issues.apache.org/jira/browse/SPARK-12947
Project: Spark
Issue Type: Bug
Components: ML
Affects Versions: 1.6.0
Environment: Spark 1.6.0-SNAPSHOT
Reporter: Sam Stoelinga
I'm using Swift as the underlying storage for my Spark jobs, but it sometimes throws EOFExceptions for some parts of the data.
Another user has hit the same issue: http://stackoverflow.com/questions/32400137/spark-swift-integration-parquet
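For context, the Swift access goes through the hadoop-openstack filesystem. A minimal sketch of that kind of configuration (the SparkSwift service name, Keystone URL and credentials below are placeholders, not my actual values):
```
// Sketch of pointing Spark at Swift via hadoop-openstack; all values are placeholders.
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.swift.impl", "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem")
hadoopConf.set("fs.swift.service.SparkSwift.auth.url", "http://keystone.example.com:5000/v2.0/tokens")
hadoopConf.set("fs.swift.service.SparkSwift.tenant", "demo")
hadoopConf.set("fs.swift.service.SparkSwift.username", "user")
hadoopConf.set("fs.swift.service.SparkSwift.password", "secret")
hadoopConf.set("fs.swift.service.SparkSwift.public", "true")

// featurePath below then points at a container in that service, e.g.
val featurePath = "swift://features.SparkSwift/parquet"
```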
Code to reproduce:
```
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.functions.explode

val features = sqlContext.read.parquet(featurePath)
// Flatten the Array[Vector] column into one feature vector per row
val exploded = features.select(explode(features("features"))).toDF("features")
val kmeans = new KMeans()
  .setK(k)
  .setFeaturesCol("features")
  .setPredictionCol("prediction")
val model = kmeans.fit(exploded)
```
features is a DataFrame with two columns:
  image: String, features: Array[Vector]
exploded is a DataFrame with a single column:
  features: Vector
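To make the shape of that data concrete, a tiny in-memory DataFrame with the same schema (image names and vector values below are made up) goes through the same explode step:
```
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.functions.explode

// Stand-in for the real data: one row per image, each with several feature vectors.
val features = sqlContext.createDataFrame(Seq(
  ("img-001.jpg", Array(Vectors.dense(0.1, 0.2), Vectors.dense(0.3, 0.4))),
  ("img-002.jpg", Array(Vectors.dense(0.5, 0.6)))
)).toDF("image", "features")

// After exploding the Array[Vector] column there is one Vector per row.
val exploded = features.select(explode(features("features"))).toDF("features")
exploded.printSchema()
```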
The following exception is thrown when running takeSample on a large dataset saved as a Parquet file (~1 GB or larger):
java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:756)
at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:494)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.hasNext(SqlNewHadoopRDD.scala:168)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:350)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.rdd.RDD$$anonfun$zip$1$$anonfun$apply$30$$anon$1.hasNext(RDD.scala:827)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1563)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1119)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1119)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1840)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1840)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
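For reference, the sampling step can be reduced to a direct call on the exploded data (sample size and seed below are arbitrary). takeSample counts the RDD first, which forces a read of every Parquet part on Swift, and that is enough to hit the EOFException on the affected chunks:
```
// Arbitrary sample size and seed; the internal count() scans all Parquet parts.
val sample = exploded.rdd.takeSample(withReplacement = false, num = 200, seed = 42L)
```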