Posted to issues@spark.apache.org by "Dmitry Goldenberg (JIRA)" <ji...@apache.org> on 2015/04/26 21:26:39 UTC

[jira] [Commented] (SPARK-7154) Spark distro appears to be pulling in incorrect protobuf classes

    [ https://issues.apache.org/jira/browse/SPARK-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14513208#comment-14513208 ] 

Dmitry Goldenberg commented on SPARK-7154:
------------------------------------------

As a workaround, I tried setting the following:
{code}
SparkConf sparkConf = new SparkConf().setAppName(appName);
sparkConf.set("spark.executor.userClassPathFirst", "true");
{code}
Running in standalone mode, I then started getting the following exceptions from Spark while trying to run the jobs with spark-submit:

{code}
15/04/26 15:19:00 ERROR scheduler.JobScheduler: Error running job streaming job 1430075940000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost): java.lang.ClassCastException: cannot assign instance of scala.None$ to field org.apache.spark.scheduler.Task.metrics of type scala.Option in instance of org.apache.spark.scheduler.ResultTask
        at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
        at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1999)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1191)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
{code}
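
For reference, as far as I know there is also a driver-side counterpart to that flag (spark.driver.userClassPathFirst). A minimal sketch of setting both, reusing the same appName variable as above (I haven't verified whether the driver-side setting changes anything for this particular error):

{code}
import org.apache.spark.SparkConf;

// Sketch only: ask Spark to prefer the application's classes over the ones
// bundled in the Spark assembly, on both the driver and the executors.
SparkConf sparkConf = new SparkConf().setAppName(appName);
sparkConf.set("spark.driver.userClassPathFirst", "true");    // driver-side counterpart
sparkConf.set("spark.executor.userClassPathFirst", "true");  // the setting tried above
{code}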

I need a solution on one side of this or the other: either Spark needs to pull in the right protobuf classes, or this new exception needs a workaround; otherwise I'm basically dead in the water.

> Spark distro appears to be pulling in incorrect protobuf classes
> ----------------------------------------------------------------
>
>                 Key: SPARK-7154
>                 URL: https://issues.apache.org/jira/browse/SPARK-7154
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 1.3.0
>            Reporter: Dmitry Goldenberg
>
> If you download Spark from the downloads page at
> https://spark.apache.org/downloads.html
> (for example, I chose
> http://www.apache.org/dyn/closer.cgi/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.4.tgz),
> you may see incompatibilities with other libraries due to incorrect protobuf classes.
> I'm seeing such a case in my Spark Streaming job, which attempts to use Apache Phoenix to update records in HBase. The job is built with a protobuf 2.5.0 dependency. However, at runtime Spark's classes take precedence in class loading, and that causes exceptions such as the following:
> java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
>         at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
>         at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
>         at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
>         at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
>         at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
>         at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:237)
>         at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:231)
>         at org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
>         at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
>         at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
>         at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
>         at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
>         at com.kona.core.upload.persistence.hdfshbase.HUploadWorkqueueHelper.updateUploadWorkqueueEntry(HUploadWorkqueueHelper.java:139)
>         at com.kona.core.upload.persistence.hdfshbase.HdfsHbaseUploadPersistenceProvider.updateUploadWorkqueueEntry(HdfsHbaseUploadPersistenceProvider.java:144)
>         at com.kona.pipeline.sparkplug.error.UploadEntryErrorHandlerImpl.onError(UploadEntryErrorHandlerImpl.java:62)
>         at com.kona.pipeline.sparkplug.pipeline.KonaPipelineImpl.processError(KonaPipelineImpl.java:305)
>         at com.kona.pipeline.sparkplug.pipeline.KonaPipelineImpl.processPipelineDocument(KonaPipelineImpl.java:208)
>         at com.kona.pipeline.sparkplug.runner.KonaPipelineRunnerImpl.notifyItemReceived(KonaPipelineRunnerImpl.java:79)
>         at com.kona.pipeline.streaming.spark.ProcessPartitionFunction.call(ProcessPartitionFunction.java:83)
>         at com.kona.pipeline.streaming.spark.ProcessPartitionFunction.call(ProcessPartitionFunction.java:25)
>         at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:198)
>         at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:198)
>         at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:806)
>         at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:806)
>         at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1497)
>         at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1497)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>         at org.apache.spark.scheduler.Task.run(Task.scala:64)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1265)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1258)
>         at org.apache.hadoop.hbase.client.HTable$17.call(HTable.java:1608)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> If you compare (e.g. with the cmp command) the protobuf classes inside the Spark assembly jar against the classes in the stock protobuf 2.5.0 jar, the following do not match:
> BoundedByteString$1.class
> BoundedByteString$BoundedByteIterator.class
> BoundedByteString.class
> ByteString$1.class
> ByteString$ByteIterator.class
> ByteString$CodedBuilder.class
> ByteString$Output.class
> ByteString.class
> CodedInputStream.class
> CodedOutputStream$OutOfSpaceException.class
> CodedOutputStream.class
> LiteralByteString$1.class
> LiteralByteString$LiteralByteIterator.class
> LiteralByteString.class
> All of these are dependency classes for HBaseZeroCopyByteString, and they're incompatible, which explains the java.lang.IllegalAccessError. (A quick runtime check for which jar they actually come from is sketched below.)
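> As a quick way to double-check which jar those protobuf classes are actually loaded from at runtime, a small sketch like this (just printing the code source of ByteString from inside the job) can help:
>         Class<?> byteStringClass = Class.forName("com.google.protobuf.ByteString");
>         // Prints the jar that won class loading, e.g. the Spark assembly jar
>         // versus protobuf-java-2.5.0.jar from the application's classpath.
>         System.out.println(byteStringClass.getProtectionDomain().getCodeSource().getLocation());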
> What's not yet clear to me is how they can be wrong if the Spark pom specifies 2.5.0:
>     <profile>
>       <id>hadoop-2.4</id>
>       <properties>
>         <hadoop.version>2.4.0</hadoop.version>
>         <protobuf.version>2.5.0</protobuf.version>
>         <jets3t.version>0.9.3</jets3t.version>
>         <hbase.version>0.98.7-hadoop2</hbase.version>
>         <commons.math3.version>3.1.1</commons.math3.version>
>         <avro.mapred.classifier>hadoop2</avro.mapred.classifier>
>         <codehaus.jackson.version>1.9.13</codehaus.jackson.version>
>       </properties>
>     </profile>
> This looks correct and in theory should override the <protobuf.version>2.4.1</protobuf.version> specified higher up in the parent pom (https://github.com/apache/spark/blob/master/pom.xml).
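> One more check that could narrow it down. My assumption here is that com.google.protobuf.Parser was only introduced in protobuf 2.5.0, so if the 2.4.1 classes won, loading it should fail:
>         try {
>             // Parser is (to my knowledge) new in protobuf 2.5.0.
>             Class.forName("com.google.protobuf.Parser");
>             System.out.println("protobuf 2.5.0+ classes are visible");
>         } catch (ClassNotFoundException e) {
>             System.out.println("Parser not found; looks like pre-2.5.0 protobuf classes");
>         }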


