Posted to dev@spark.apache.org by Rostyslav Sotnychenko <r....@gmail.com> on 2016/06/09 15:15:25 UTC

Strange exception while reading Parquet files

Hello!

I have faced a very strange exception (stack trace at the end of this
email) while trying to read a Parquet file using HiveContext on Spark
1.3.1 with Hive 0.13.

This issue appears only on YARN (standalone and local modes work fine) and
only when HiveContext is used (with SQLContext everything works fine).
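
For reference, here is a minimal sketch of what I am running (Spark 1.3
API; the path is made up, but the pattern matches my job):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("parquet-repro"))

    // Fails with the EOFException below when the job runs on YARN:
    val hiveCtx = new HiveContext(sc)
    hiveCtx.parquetFile("/tmp/data.parquet").count()  // illustrative path

    // The same read through a plain SQLContext works in every deploy mode:
    val sqlCtx = new SQLContext(sc)
    sqlCtx.parquetFile("/tmp/data.parquet").count()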

After researching for a whole week, I was able to find only two mentions
of this problem. One is an unanswered email to the spark-user mailing list
<https://mail-archives.apache.org/mod_mbox/spark-user/201604.mbox/%3C24E8D947D2CC144DA340FBD0F71DD67232B1C912@MAILBOX-HYD.capiqcorp.com%3E>;
the second is an issue on some company's Jira
<https://jira.talendforge.org/browse/TBD-3615>.

The only workaround I have found so far is upgrading to Spark 1.4.1, but
that isn't a real solution.


Does anyone know how to deal with this?


Thanks in advance,
Rostyslav Sotnychenko



---------- STACK TRACE ------------

16/06/10 00:13:58 ERROR TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 14, 2-op.cluster): java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at parquet.hadoop.ParquetInputSplit.readArray(ParquetInputSplit.java:240)
    at parquet.hadoop.ParquetInputSplit.readUTF8(ParquetInputSplit.java:230)
    at parquet.hadoop.ParquetInputSplit.readFields(ParquetInputSplit.java:197)
    at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
    at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
    at org.apache.spark.SerializableWritable$$anonfun$readObject$1.apply$mcV$sp(SerializableWritable.scala:43)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1138)
    at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:39)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1897)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1997)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1921)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1997)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1921)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Re: Strange exception while reading Parquet files

Posted by Takeshi Yamamuro <li...@gmail.com>.
Hi,

Does this issue also occur in v1.6.1 and v2.0-preview?
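
If you get a chance to try, a quick check in the shell would be something
like this (the path is only an example):

    // Spark 1.6.1:
    sqlContext.read.parquet("/path/to/data.parquet").count()  // illustrative path

    // Spark 2.0-preview:
    spark.read.parquet("/path/to/data.parquet").count()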

// maropu

On Thu, Jun 9, 2016 at 8:15 AM, Rostyslav Sotnychenko <r.sotnychenko@gmail.com> wrote:

-- 
---
Takeshi Yamamuro