Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/11/01 23:22:34 UTC

[jira] [Resolved] (SPARK-4121) Master build failures after shading commons-math3

     [ https://issues.apache.org/jira/browse/SPARK-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-4121.
------------------------------------
    Resolution: Fixed

Okay, I merged this. Let's see how it goes.

> Master build failures after shading commons-math3
> -------------------------------------------------
>
>                 Key: SPARK-4121
>                 URL: https://issues.apache.org/jira/browse/SPARK-4121
>             Project: Spark
>          Issue Type: Bug
>          Components: Build, MLlib, Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Blocker
>
> The Spark master Maven build kept failing after we replaced colt with commons-math3 and shaded the latter:
> https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/
> The error message is:
> {code}
> KMeansClusterSuite:
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> - task size should be small in both training and prediction *** FAILED ***
>   org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 9, localhost): java.io.InvalidClassException: org.apache.spark.util.random.PoissonSampler; local class incompatible: stream classdesc serialVersionUID = -795011761847245121, local class serialVersionUID = 4249244967777318419
>         java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
>         java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
>         java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
>         java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>         java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
>         java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>         java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>         java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
>         java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>         java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>         java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>         org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
>         org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
>         org.apache.spark.scheduler.Task.run(Task.scala:56)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:186)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> {code}
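> The java.io.InvalidClassException above is the classic symptom of the driver and the executors loading different bytecode for the same class: when a class has no explicit serialVersionUID, Java derives one by hashing the class's structure, including the types of its fields and method signatures, so a shading step that rewrites a referenced package (here commons-math3, which PoissonSampler now uses) changes the derived UID. A minimal Scala sketch of the usual mitigation, with an illustrative class name and UID value rather than Spark's actual code:
> {code}
> // Without the annotation, the JVM derives serialVersionUID from the class
> // structure, so relocating a referenced package via shading changes the hash.
> // Pinning the UID keeps serialized instances compatible across builds.
> // ExampleSampler and the value 1L are illustrative, not from Spark.
> @SerialVersionUID(1L)
> class ExampleSampler(val mean: Double) extends Serializable
> {code}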
> This test passed in a local sbt build, so the issue is likely caused by the shading. Maybe there are two versions of commons-math3 on the classpath (Hadoop depends on it), or MLlib doesn't use the shaded version at compile time.
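> A quick way to test the two-versions hypothesis is to ask the JVM which jar a commons-math3 class is actually loaded from; a sketch (the class picked below is just an example):
> {code}
> // Prints the jar providing the class on the current classpath. If this points
> // at an unshaded commons-math3 (e.g. pulled in transitively by Hadoop), the
> // shaded relocation is not taking effect for this code path.
> // Note: getCodeSource can be null for bootstrap-loaded classes.
> val cls = Class.forName("org.apache.commons.math3.distribution.PoissonDistribution")
> println(cls.getProtectionDomain.getCodeSource.getLocation)
> {code}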
> [~srowen] Could you take a look? Thanks!


