Posted to user@spark.apache.org by Andreas Weise <an...@gmail.com> on 2022/03/09 16:09:44 UTC

RebaseDateTime with dynamicAllocation

Hi,

When playing around with spark.dynamicAllocation.enabled I face the
following error after the first round of executors have been killed.

Py4JJavaError: An error occurred while calling o337.showString. :
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 18.0 failed 4 times, most recent failure: Lost task 1.3 in stage 18.0 (TID 220) (10.128.6.170 executor 13): java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.catalyst.util.RebaseDateTime$
    at org.apache.spark.sql.catalyst.util.RebaseDateTime.lastSwitchJulianTs(RebaseDateTime.scala)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseTimestamp(ParquetVectorUpdaterFactory.java:1067)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseInt96(ParquetVectorUpdaterFactory.java:1088)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.access$1500(ParquetVectorUpdaterFactory.java:43)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory$BinaryToSQLTimestampRebaseUpdater.decodeSingleDictionaryId(ParquetVectorUpdaterFactory.java:860)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdater.decodeDictionaryIds(ParquetVectorUpdater.java:75)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:216)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:298)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:196)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:191)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_1$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
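
For context, the failing call is an ordinary show() on an aggregation over parquet data containing INT96 timestamps (that is what drives the BinaryToSQLTimestampRebaseUpdater frames above). A hedged sketch of the shape of the job; the path and column name are hypothetical:

# Repro sketch only -- input path and column name are hypothetical, but the shape
# matches the trace: parquet scan with INT96 timestamp rebase, hash aggregate,
# then show() (showString in the driver frames).
df = spark.read.parquet("/data/events")      # hypothetical input with INT96 timestamps
df.groupBy("event_type").count().show()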

We tested on Spark 3.2.1 on k8s with these dynamicAllocation settings:

spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.maxExecutors=4
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.executorIdleTimeout=30s
spark.dynamicAllocation.shuffleTracking.enabled=true
spark.dynamicAllocation.shuffleTracking.timeout=30s
spark.decommission.enabled=true
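
For reference, a minimal sketch of how the same settings can be applied when building a PySpark session (the app name is hypothetical; the properties can equally go into spark-defaults.conf or spark-submit --conf):

from pyspark.sql import SparkSession

# Sketch only: the dynamic-allocation settings above, set at session build time
# (i.e. before the SparkContext starts, which these properties require).
spark = (
    SparkSession.builder
    .appName("dyn-alloc-test")  # hypothetical name
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.maxExecutors", "4")
    .config("spark.dynamicAllocation.minExecutors", "1")
    .config("spark.dynamicAllocation.executorIdleTimeout", "30s")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.timeout", "30s")
    .config("spark.decommission.enabled", "true")
    .getOrCreate()
)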

Might be related to SPARK-34772 /
https://www.mail-archive.com/commits@spark.apache.org/msg50240.html, but as
that was fixed in 3.2.0, it might be worth filing a separate issue?

Best regards
Andreas

Re: RebaseDateTime with dynamicAllocation

Posted by Andreas Weise <an...@gmail.com>.
Okay, found the root cause. Our k8s image had recently been changed, and the
changes included a mix-up in the jar dependencies around com.fasterxml.jackson ...

Sorry for the inconvenience.

An earlier log entry on the driver already contained the relevant info:

[2022-03-09 21:54:25,163] ({task-result-getter-3} Logging.scala[logWarning]:69) - Lost task 1.0 in stage 0.0 (TID 1) (10.131.0.221 executor 1): java.lang.ExceptionInInitializerError
    at org.apache.spark.sql.catalyst.util.RebaseDateTime.lastSwitchJulianTs(RebaseDateTime.scala)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseTimestamp(ParquetVectorUpdaterFactory.java:1067)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseInt96(ParquetVectorUpdaterFactory.java:1088)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.access$1500(ParquetVectorUpdaterFactory.java:43)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory$BinaryToSQLTimestampRebaseUpdater.decodeSingleDictionaryId(ParquetVectorUpdaterFactory.java:860)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdater.decodeDictionaryIds(ParquetVectorUpdater.java:75)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:216)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:298)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:196)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:191)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_1$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.util.random.SamplingUtils$.reservoirSampleAndCount(SamplingUtils.scala:41)
    at org.apache.spark.RangePartitioner$.$anonfun$sketch$1(Partitioner.scala:306)
    at org.apache.spark.RangePartitioner$.$anonfun$sketch$1$adapted(Partitioner.scala:304)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndex$2(RDD.scala:915)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndex$2$adapted(RDD.scala:915)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Scala module 2.12.3 requires Jackson Databind version >= 2.12.0 and < 2.13.0
    at com.fasterxml.jackson.module.scala.JacksonModule.setupModule(JacksonModule.scala:61)
    at com.fasterxml.jackson.module.scala.JacksonModule.setupModule$(JacksonModule.scala:46)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:17)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:751)
    at org.apache.spark.sql.catalyst.util.RebaseDateTime$.loadRebaseRecords(RebaseDateTime.scala:277)
    at org.apache.spark.sql.catalyst.util.RebaseDateTime$.<init>(RebaseDateTime.scala:300)
    at org.apache.spark.sql.catalyst.util.RebaseDateTime$.<clinit>(RebaseDateTime.scala)
    ... 38 more
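
The Caused by makes the conflict explicit: jackson-module-scala 2.12.3 refuses to register against a jackson-databind outside >= 2.12.0 and < 2.13.0, and that failed static initializer is what later surfaces as the NoClassDefFoundError for RebaseDateTime$. A minimal diagnostic sketch for anyone hitting the same thing, assuming an existing PySpark session; it uses py4j's non-public _jvm handle, and PackageVersion is the version holder jackson-databind ships inside its own jar:

# Diagnostic sketch: print the jackson-databind version the driver JVM actually loaded.
# On k8s the driver and executor pods typically run the same image, so the driver's
# answer is usually representative of the executors' classpath as well.
jvm = spark.sparkContext._jvm  # non-public py4j gateway into the driver JVM
databind = jvm.com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION
print("jackson-databind on driver:", databind.toString())
# (Alternatively, inspect the image directly: ls $SPARK_HOME/jars | grep -i jackson)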

On Wed, Mar 9, 2022 at 5:30 PM Andreas Weise <an...@gmail.com>
wrote:

> [...]

Re: RebaseDateTime with dynamicAllocation

Posted by Andreas Weise <an...@gmail.com>.
The full trace doesn't provide any further details. It looks like this:

Py4JJavaError: An error occurred while calling o337.showString. :
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 18.0 failed 4 times, most recent failure: Lost task 1.3 in stage 18.0 (TID 220) (10.128.6.170 executor 13): java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.catalyst.util.RebaseDateTime$
    at org.apache.spark.sql.catalyst.util.RebaseDateTime.lastSwitchJulianTs(RebaseDateTime.scala)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseTimestamp(ParquetVectorUpdaterFactory.java:1067)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseInt96(ParquetVectorUpdaterFactory.java:1088)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.access$1500(ParquetVectorUpdaterFactory.java:43)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory$BinaryToSQLTimestampRebaseUpdater.decodeSingleDictionaryId(ParquetVectorUpdaterFactory.java:860)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdater.decodeDictionaryIds(ParquetVectorUpdater.java:75)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:216)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:298)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:196)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:191)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_1$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:476)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:429)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48)
    at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3715)
    at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2728)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2728)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2935)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:287)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:326)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.catalyst.util.RebaseDateTime$
    at org.apache.spark.sql.catalyst.util.RebaseDateTime.lastSwitchJulianTs(RebaseDateTime.scala)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseTimestamp(ParquetVectorUpdaterFactory.java:1067)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.rebaseInt96(ParquetVectorUpdaterFactory.java:1088)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory.access$1500(ParquetVectorUpdaterFactory.java:43)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdaterFactory$BinaryToSQLTimestampRebaseUpdater.decodeSingleDictionaryId(ParquetVectorUpdaterFactory.java:860)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetVectorUpdater.decodeDictionaryIds(ParquetVectorUpdater.java:75)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:216)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:298)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:196)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:191)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
    at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_1$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more

On Wed, Mar 9, 2022 at 5:11 PM Sean Owen <sr...@gmail.com> wrote:

> [...]

Re: RebaseDateTime with dynamicAllocation

Posted by Sean Owen <sr...@gmail.com>.
Doesn't quite seem the same. What is the rest of the error -- why did the
class fail to initialize?

On Wed, Mar 9, 2022 at 10:08 AM Andreas Weise <an...@gmail.com>
wrote:

> [...]