Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/05/21 03:06:00 UTC

[jira] [Resolved] (SPARK-24302) when using spark persist(),"KryoException:IndexOutOfBoundsException" happens

     [ https://issues.apache.org/jira/browse/SPARK-24302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-24302.
----------------------------------
    Resolution: Invalid

This sounds like a question, and it's not clear that it's a Spark issue. Let's take it to the dev mailing list and leave this resolved for now, until it's clear whether there is a bug in Spark.

Also, 1.6.0 is quite old at this point, BTW.
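For the dev-list discussion, one plausible reading of the trace: when the RDD is cached in a serialized form, Spark runs every cached record through Kryo, including the mutable org.apache.hadoop.hbase.client.Put values whose familyMap is what FieldSerializer chokes on here. A hedged, untested sketch of two common workarounds (Spark 1.6-era APIs; `localData` and `jobConf` are the reporter's names, everything else is an illustrative assumption):

```scala
// Hedged sketch, Spark 1.6-era APIs. `localData` and `jobConf` come from the
// report; the storage-level choice and the defensive copy are assumptions.
import org.apache.hadoop.hbase.client.Put
import org.apache.spark.storage.StorageLevel

// 1) Cache deserialized objects so Kryo never has to round-trip the Puts:
localData.persist(StorageLevel.MEMORY_ONLY)

// 2) Or defensively copy each mutable Put before caching, so the cached copy
//    cannot be mutated (or reused by the input side) while Kryo reads it back:
val copied = localData.map { case (key, put) => (key, new Put(put)) }
copied.persist()
copied.saveAsNewAPIHadoopDataset(jobConf.getConfiguration)
```

If the defensive copy makes the error go away, that would point at shared mutable Put instances rather than a Spark bug, which is worth stating when the question is re-asked on the dev list.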

> when using spark persist(),"KryoException:IndexOutOfBoundsException" happens
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-24302
>                 URL: https://issues.apache.org/jira/browse/SPARK-24302
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 1.6.0
>            Reporter: yijukang
>            Priority: Major
>              Labels: apache-spark
>
> I am using Spark to insert RDD data into HBase like this:
> --------------------------------------
> localData.persist()
> localData.saveAsNewAPIHadoopDataset(jobConf.getConfiguration)
> --------------------------------------
> This throws an exception:
>    com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: Index: 99, Size: 6
> Serialization trace:
>     familyMap (org.apache.hadoop.hbase.client.Put)
>     at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
>     at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
>     at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:42)
>     at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33)
>     at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
>  
> When I drop the persist() call and only run:
> --------------------------------------
> localData.saveAsNewAPIHadoopDataset(jobConf.getConfiguration)
> --------------------------------------
> it works well. What does the persist() method do that causes this?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org