Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/06/26 11:16:04 UTC

[jira] [Commented] (SPARK-8502) One character switches into uppercase, causing failures [serialization? shuffle?]

    [ https://issues.apache.org/jira/browse/SPARK-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602607#comment-14602607 ] 

Sean Owen commented on SPARK-8502:
----------------------------------

I feel like I have seen this before and that there's a JIRA about it, but I can't find it; it's worth searching JIRA more thoroughly. I don't know anything about the cause, any resolution, or whether it's a Spark issue at all.

> One character switches into uppercase, causing failures [serialization? shuffle?]
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-8502
>                 URL: https://issues.apache.org/jira/browse/SPARK-8502
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle, Spark Core
>    Affects Versions: 1.3.1
>            Reporter: vidmantas zemleris
>              Labels: serialization
>
> This seems to be a weird, random, and hard-to-debug issue: a single character changes case (the character itself stays the same). We see it in our 2h+ workflow roughly every third run.
> One example:
> {quote}
> com.esotericsoftware.kryo.KryoException: Unable to find class: [Lorg.apache.sPark.sql.catalyst.expressions.MutableValue
> Serialization trace:
> values (org.apache.spark.sql.catalyst.expressions.SpecificMutableRow)
>         at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
> {quote}
> Notice how `spark` turned into `sPark` (!!!)
> What I have tracked down so far is that the same "mistake" is present on multiple executors, so this bug quite likely happens during serialization.
> This also happens with other custom data structures, like a `case class(m: Map[String, String])`, which seems to deserialize OK but contains a wrong value.
> We use Scala 2.10.4 (the same happens with 2.10.5) and Spark 1.3.1, compiled for CDH 5.3.2 with YARN, with the Kryo serializer enabled.
> We also use algebird 0.10.0 (which requires chill 0.6, vs. the chill 0.5 used in Spark 1.3/1.4; I'm pretty sure we've seen the issue with the older chill 0.5 too).
> Has anyone else seen the issue? Any ideas?
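One observation worth noting about the `spark` vs `sPark` corruption above: in ASCII, `p` (0x70) and `P` (0x50) differ in exactly one bit (0x20, the case bit), which is the classic signature of a single flipped bit in the serialized byte stream rather than a logic error. A minimal check (a standalone sketch, not part of the reported workflow):

```java
// Demonstrates that the reported corruption ('p' -> 'P') is a
// single-bit difference in the encoded class name.
public class BitFlipCheck {
    public static void main(String[] args) {
        char good = 'p';   // from "spark", the correct class name
        char bad  = 'P';   // from "sPark", the corrupted class name
        int xor = good ^ bad;
        // Prints the two code points and their XOR: 0x70 ^ 0x50 = 0x20.
        System.out.printf("0x%02x ^ 0x%02x = 0x%02x%n", (int) good, (int) bad, xor);
        // Exactly one bit differs between the two characters.
        System.out.println(Integer.bitCount(xor) == 1);
    }
}
```

If that reading is right, the trail would point toward flaky hardware, network, or disk corruption during shuffle rather than a Kryo or chill bug.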
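For reference, the Kryo setup the reporter describes is configured through `spark.serializer`; turning on `spark.kryo.registrationRequired` makes Kryo fail fast on unregistered classes instead of writing full class names into the stream, which can narrow down where a corrupted name enters. A minimal configuration sketch (the setting keys, `KryoSerializer`, and `registerKryoClasses` are real Spark APIs; the registered class here is just a placeholder):

```java
import org.apache.spark.SparkConf;

public class KryoConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("kryo-sketch")
            // Switch from Java serialization to Kryo, as in the report.
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            // Fail fast on unregistered classes instead of silently
            // embedding fully qualified class names in the stream.
            .set("spark.kryo.registrationRequired", "true")
            // Placeholder registration; real jobs would list their own classes.
            .registerKryoClasses(new Class<?>[] { java.util.HashMap.class });
    }
}
```

This is a configuration fragment only (it needs Spark on the classpath); it does not reproduce the bug, but registered class IDs avoid the string class names that got corrupted in the stack trace above.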



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
