Posted to user@spark.apache.org by Archit Thakur <ar...@gmail.com> on 2014/01/01 17:22:05 UTC

Not able to understand Exception.

I have recently moved to Kryo serialization to get better performance and
have written serializers for some of my custom data structures. What could
the exception below be about? (I don't see any of my own code in the stack
trace.)

java.lang.ArrayIndexOutOfBoundsException: -2
        at java.util.ArrayList.get(Unknown Source)
        at com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
        at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:773)
        at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:727)
        at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:106)
        at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:101)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
        at scala.collection.Iterator$$anon$21.hasNext(Iterator.scala:440)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:26)
        at scala.collection.Iterator$class.foreach(Iterator.scala:772)
        at org.apache.spark.util.CompletionIterator.foreach(CompletionIterator.scala:23)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:102)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:250)
        at org.apache.spark.util.CompletionIterator.toBuffer(CompletionIterator.scala:23)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:237)
        at org.apache.spark.util.CompletionIterator.toArray(CompletionIterator.scala:23)
        at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:44)
        at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:43)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:36)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
        at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:29)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
        at org.apache.spark.scheduler.ResultTask.run(ResultTask.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:158)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
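
For context, this is roughly how Kryo gets wired in (a sketch;
MyRegistrator is a placeholder for the actual registrator class):

<Code>
// Spark 0.8-era configuration, set before the SparkContext is created.
System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
System.setProperty("spark.kryo.registrator", "MyRegistrator")
</Code>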

Any ideas or suggestions would help.

Thanks,
Archit.

Re: Not able to understand Exception.

Posted by Archit Thakur <ar...@gmail.com>.
Yes, I am using my custom data structures (for both the key and the value)
and have registered serializers for them with Kryo:

<Code>
kryo.register(classOf[MyClass], MyCustomSerializerInstance)
</Code>
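
For reference, here is a minimal sketch of how such a registration is wired
into Spark; MyClass and MyClassSerializer below are hypothetical stand-ins
for the actual types:

<Code>
import com.esotericsoftware.kryo.{Kryo, Serializer}
import com.esotericsoftware.kryo.io.{Input, Output}
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical custom type standing in for one of the data structures.
class MyClass(val id: Int, val name: String)

// A custom serializer: read() must consume exactly what write() produced.
class MyClassSerializer extends Serializer[MyClass] {
  override def write(kryo: Kryo, output: Output, obj: MyClass): Unit = {
    output.writeInt(obj.id)
    output.writeString(obj.name)
  }
  override def read(kryo: Kryo, input: Input, cls: Class[MyClass]): MyClass =
    new MyClass(input.readInt(), input.readString())
}

// Hooked into Spark via the spark.kryo.registrator property.
class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[MyClass], new MyClassSerializer)
  }
}
</Code>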

Thanks and Regards,
Archit Thakur.


On Thu, Jan 2, 2014 at 4:26 AM, Christopher Nguyen <ct...@adatao.com> wrote:

> Archit, this occurs in the ResultTask phase, triggered by the call to
> sortByKey. Before that point, your RDD would have been serialized, e.g.,
> for shuffling.
>
> So it looks like Kryo wasn't able to deserialize some part of the RDD for
> some reason, possibly due to a format incompatibility. Did you say you
> wrote your own serializers?
>
> --
> Christopher T. Nguyen
> Co-founder & CEO, Adatao <http://adatao.com>
> linkedin.com/in/ctnguyen
>
>
>
> On Wed, Jan 1, 2014 at 8:22 AM, Archit Thakur <ar...@gmail.com> wrote:
>
>> I have recently moved to Kryo serialization to get better performance
>> and have written serializers for some of my custom data structures.
>> What could the exception below be about? (I don't see any of my own
>> code in the stack trace.)
>>
>> java.lang.ArrayIndexOutOfBoundsException: -2
>> [stack trace snipped; see the original message above]
>>
>> Any ideas or suggestions would help.
>>
>> Thanks,
>> Archit.
>>

Re: Not able to understand Exception.

Posted by Christopher Nguyen <ct...@adatao.com>.
Archit, this occurs in the ResultTask phase, triggered by the call to
sortByKey. Before that point, your RDD would have been serialized, e.g.,
for shuffling.

So it looks like Kryo wasn't able to deserialize some part of the RDD for
some reason, possibly due to a format incompatibility. Did you say you
wrote your own serializers?
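
One way this specific symptom (a negative index inside MapReferenceResolver)
can arise, though I can't confirm it's your case, is a custom serializer
whose read() does not consume exactly the bytes its write() produced: the
stream goes out of alignment and Kryo interprets garbage as a reference id.
A deliberately broken sketch, using a hypothetical Point class:

<Code>
import com.esotericsoftware.kryo.{Kryo, Serializer}
import com.esotericsoftware.kryo.io.{Input, Output}

// Hypothetical type, purely to illustrate the failure mode.
class Point(val x: Int, val y: Int)

class BrokenPointSerializer extends Serializer[Point] {
  override def write(kryo: Kryo, output: Output, p: Point): Unit = {
    output.writeInt(p.x)
    output.writeInt(p.y)          // write() emits two ints...
  }
  override def read(kryo: Kryo, input: Input, cls: Class[Point]): Point =
    new Point(input.readInt(), 0) // ...but read() consumes only one int,
                                  // leaving the stream misaligned; the next
                                  // readClassAndObject can then see a bogus
                                  // reference id such as -2
}
</Code>

If your read/write pairs are symmetric, it is also worth checking that the
driver and every executor run the same jar, so that classes are registered
with Kryo in the same order on both sides.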

--
Christopher T. Nguyen
Co-founder & CEO, Adatao <http://adatao.com>
linkedin.com/in/ctnguyen



On Wed, Jan 1, 2014 at 8:22 AM, Archit Thakur <ar...@gmail.com> wrote:

> I have recently moved to Kryo serialization to get better performance
> and have written serializers for some of my custom data structures.
> What could the exception below be about? (I don't see any of my own
> code in the stack trace.)
>
> java.lang.ArrayIndexOutOfBoundsException: -2
> [stack trace snipped; see the original message above]
>
> Any ideas or suggestions would help.
>
> Thanks,
> Archit.
>