Posted to issues@beam.apache.org by "Beam JIRA Bot (Jira)" <ji...@apache.org> on 2021/02/05 17:15:04 UTC
[jira] [Commented] (BEAM-11498) Spark integration tests on Go SDK failing.
[ https://issues.apache.org/jira/browse/BEAM-11498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279826#comment-17279826 ]
Beam JIRA Bot commented on BEAM-11498:
--------------------------------------
This issue is assigned but has not received an update in 30 days so it has been labeled "stale-assigned". If you are still working on the issue, please give an update and remove the label. If you are no longer working on the issue, please unassign so someone else may work on it. In 7 days the issue will be automatically unassigned.
> Spark integration tests on Go SDK failing.
> ------------------------------------------
>
> Key: BEAM-11498
> URL: https://issues.apache.org/jira/browse/BEAM-11498
> Project: Beam
> Issue Type: Bug
> Components: cross-language, sdk-go
> Reporter: Daniel Oliveira
> Assignee: Daniel Oliveira
> Priority: P2
> Labels: stale-assigned
>
> Configuration in which it's failing:
> * Go Pipelines
> * Spark Runner Job Server (Java)
> * Java Test Expansion Service
> Specifically, it's failing when being run through the new ValidatesRunner framework.
> Edit: Looks like TestParDoSideInput and TestParDoKVSideInput are also failing with the same error, so this doesn't seem to be a cross-language issue.
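> For reference, a rough sketch of what a side-input test looks like in the Go SDK's test style (illustrative only; the real tests live in the SDK's integration test suite, and the exact helpers, import paths, and values here are assumptions based on the Go SDK at the time):
> {noformat}
> package integration_test
>
> import (
>     "testing"
>
>     "github.com/apache/beam/sdks/go/pkg/beam"
>     "github.com/apache/beam/sdks/go/pkg/beam/testing/passert"
>     "github.com/apache/beam/sdks/go/pkg/beam/testing/ptest"
> )
>
> // TestMain lets ptest pick the runner from the --runner flag, e.g. the
> // portable runner pointed at the Spark job server endpoint.
> func TestMain(m *testing.M) {
>     ptest.Main(m)
> }
>
> func init() { beam.RegisterFunction(sumSide) }
>
> // sumSide emits the sum of an iterable side input for every main-input element.
> func sumSide(word string, iter func(*int) bool, emit func(int)) {
>     total, v := 0, 0
>     for iter(&v) {
>         total += v
>     }
>     emit(total)
> }
>
> func TestParDoSideInputSketch(t *testing.T) {
>     p, s, words := ptest.CreateList([]string{"a", "b"})
>     side := beam.Create(s, 1, 2, 3)
>
>     // beam.SideInput attaches the extra PCollection; the runner has to
>     // broadcast it, which is the code path that fails in the trace below.
>     sums := beam.ParDo(s, sumSide, words, beam.SideInput{Input: side})
>     passert.Equals(s, sums, 6, 6)
>
>     if err := ptest.Run(p); err != nil {
>         t.Fatal(err)
>     }
> }
> {noformat}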
> Error:
> {noformat}
> 20/12/16 20:46:44 ERROR org.apache.beam.runners.jobsubmission.JobInvocation: Error during job invocation go0job0401608180402898378125-danoliveira-1217044644-6d1b1ec6_da728c2e-3dd7-4420-8fae-1cd3a47094b1.
> java.lang.IllegalArgumentException: Unsupported class file major version 55
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
> at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:50)
> at org.apache.spark.util.FieldAccessFinder$$anon$4$$anonfun$visitMethodInsn$7.apply(ClosureCleaner.scala:845)
> at org.apache.spark.util.FieldAccessFinder$$anon$4$$anonfun$visitMethodInsn$7.apply(ClosureCleaner.scala:828)
> at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
> at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
> at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at org.apache.spark.util.FieldAccessFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:828)
> at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
> at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
> at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
> at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
> at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:272)
> at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:271)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:271)
> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:163)
> at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
> at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
> at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
> at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:361)
> at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
> at org.apache.beam.runners.spark.translation.BoundedDataset.getBytes(BoundedDataset.java:79)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.broadcastSideInput(SparkBatchPortablePipelineTranslator.java:363)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.broadcastSideInputs(SparkBatchPortablePipelineTranslator.java:347)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.translateExecutableStage(SparkBatchPortablePipelineTranslator.java:225)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.translate(SparkBatchPortablePipelineTranslator.java:147)
> 2020/12/16 20:46:44 (): java.lang.IllegalArgumentException: Unsupported class file major version 55
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
> at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
> at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:50)
> at org.apache.spark.util.FieldAccessFinder$$anon$4$$anonfun$visitMethodInsn$7.apply(ClosureCleaner.scala:845)
> at org.apache.spark.util.FieldAccessFinder$$anon$4$$anonfun$visitMethodInsn$7.apply(ClosureCleaner.scala:828)
> at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
> at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
> at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at org.apache.spark.util.FieldAccessFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:828)
> at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
> at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
> at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
> at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
> at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:272)
> at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:271)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:271)
> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:163)
> at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
> at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
> at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
> at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:361)
> at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
> at org.apache.beam.runners.spark.translation.BoundedDataset.getBytes(BoundedDataset.java:79)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.broadcastSideInput(SparkBatchPortablePipelineTranslator.java:363)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.broadcastSideInputs(SparkBatchPortablePipelineTranslator.java:347)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.translateExecutableStage(SparkBatchPortablePipelineTranslator.java:225)
> at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.translate(SparkBatchPortablePipelineTranslator.java:147)
> at org.apache.beam.runners.spark.SparkPipelineRunner.lambda$run$2(SparkPipelineRunner.java:196)
> at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:835)
> {noformat}
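> For context: a class file's major version identifies the Java release that produced it (52 = Java 8, 55 = Java 11), so the error above means Java 11 bytecode is being handed to the ASM 6 shaded into Spark 2.x, which predates Java 11 support. A small standalone Go snippet (illustrative only, not Beam or Spark code) that reads the version straight out of a .class file header:
> {noformat}
> package main
>
> import (
>     "encoding/binary"
>     "fmt"
>     "os"
> )
>
> // Prints the version of a compiled .class file. The header layout is:
> // magic (0xCAFEBABE), minor_version, major_version, each big-endian.
> func main() {
>     data, err := os.ReadFile(os.Args[1])
>     if err != nil {
>         panic(err)
>     }
>     magic := binary.BigEndian.Uint32(data[0:4])
>     minor := binary.BigEndian.Uint16(data[4:6])
>     major := binary.BigEndian.Uint16(data[6:8]) // 52 = Java 8, 55 = Java 11
>     fmt.Printf("magic=0x%X minor=%d major=%d\n", magic, minor, major)
> }
> {noformat}
> If classes on the job server's classpath report major version 55, the likely fix is to run the Spark job server under Java 8, or to move to a Spark version that supports Java 11.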
--
This message was sent by Atlassian Jira
(v8.3.4#803005)