Posted to commits@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2020/06/13 12:24:55 UTC

[GitHub] [incubator-dolphinscheduler] zengqinchris opened a new issue #2967: Task execution keeps failing: writing in ORC format reports an error

zengqinchris opened a new issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967


   `key[key_ods_live_lesson_log] prepare already! job start ....
   	Hive Session ID = b73c6e67-f22d-4121-ad03-6535670fc19c
   [INFO] 2020-06-13 20:19:29.354  - [taskAppId=TASK-13-397-520]:[106] -  -> 20/06/13 20:19:29 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
   [INFO] 2020-06-13 20:19:33.006  - [taskAppId=TASK-13-397-520]:[106] -  -> 20/06/13 20:19:33 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 2, bigdata-5.wumii.net, executor 2): java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch(I)Lorg/apache/orc/storage/ql/exec/vector/VectorizedRowBatch;
   [INFO] 2020-06-13 20:19:33.264  - [taskAppId=TASK-13-397-520]:[106] -  -> 	at org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader.initBatch(OrcColumnarBatchReader.java:151)
   		at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat$$anonfun$buildReaderWithPartitionValues$2.apply(OrcFileFormat.scala:197)
   		at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat$$anonfun$buildReaderWithPartitionValues$2.apply(OrcFileFormat.scala:160)
   		at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
   		at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
   		at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
   		at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
   		at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(Unknown Source)
   		at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
   		at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
   		at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
   		at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
   		at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
   		at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
   		at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
   		at org.apache.spark.scheduler.Task.run(Task.scala:109)
   		at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
   		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   		at java.lang.Thread.run(Thread.java:748)
   	
   	20/06/13 20:19:33 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job
   	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   	org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 22, bigdata-5.wumii.net, executor 2): java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch(I)Lorg/apache/orc/storage/ql/exec/vector/VectorizedRowBatch;
		... (executor stack trace identical to the one above, omitted)
   	
   	Driver stacktrace:
   		at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
   		at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
   		at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
   		at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   		at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
   		at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
   		at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
   		at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
   		at scala.Option.foreach(Option.scala:257)
   		at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
   		at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
   		at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
   		at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
   		at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
   		at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
   		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
   		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
   		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
   [INFO] 2020-06-13 20:19:33.426  - [taskAppId=TASK-13-397-520]:[106] -  -> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
   		at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
   		at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
   		at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
   		at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
   		at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
   		at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
   		at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2775)
   		at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2774)
   		at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
   		at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
   		at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
   		at org.apache.spark.sql.Dataset.count(Dataset.scala:2774)
   		at com.wumii.analysis.data.center.JobBase.createDF(JobBase.scala:148)
   		at com.wumii.analysis.data.center.ods.live.LiveLessonLog.before(LiveLessonLog.scala:42)
   		at com.wumii.analysis.data.center.JobBase.run(JobBase.scala:43)
   		at com.wumii.analysis.data.center.SparkJobLaunch$.runJob(SparkJobLaunch.scala:315)
   		at com.wumii.analysis.data.center.SparkJobLaunch$.main(SparkJobLaunch.scala:44)
   		at com.wumii.analysis.data.center.SparkJobLaunch.main(SparkJobLaunch.scala)
   		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   		at java.lang.reflect.Method.invoke(Method.java:498)
   		at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
   		at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
   		at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
   		at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
   		at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
   		at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   	Caused by: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch(I)Lorg/apache/orc/storage/ql/exec/vector/VectorizedRowBatch;
		... (executor stack trace identical to the one above, omitted)
   	()
   	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   	task[key_ods_live_lesson_log] run elapse(s):6
   	20/06/13 20:19:33 WARN TaskSetManager: Lost task 4.0 in stage 0.0 (TID 6, bigdata-1.wumii.net, executor 1): TaskKilled (Stage cancelled)
   	20/06/13 20:19:33 WARN TaskSetManager: Lost task 7.0 in stage 0.0 (TID 9, bigdata-1.wumii.net, executor 1): TaskKilled (Stage cancelled)
   	20/06/13 20:19:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, bigdata-1.wumii.net, executor 1): TaskKilled (Stage cancelled)
   	20/06/13 20:19:33 WARN TaskSetManager: Lost task 10.0 in stage 0.0 (TID 14, bigdata-1.wumii.net, executor 1): TaskKilled (Stage cancelled)
   	20/06/13 20:19:33 WARN TaskSetManager: Lost task 8.0 in stage 0.0 (TID 12, bigdata-1.wumii.net, executor 1): TaskKilled (Stage cancelled)
   	20/06/13 20:19:33 WARN TaskSetManager: Lost task 2.0 in stage 0.0 (TID 3, bigdata-1.wumii.net, executor 1): TaskKilled (Stage cancelled)
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_live_course_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   [INFO] 2020-06-13 20:19:34.489  - [taskAppId=TASK-13-397-520]:[106] -  -> key[key_ods_user_live_course_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_user_live_lesson_participation_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_user_live_lesson_practice_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_user_live_lesson_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_reply_task_assignment_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_community_comment_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_community_post_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_evaluation_result_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_evaluation_task_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_question_answer_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_skill_test_question_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_dim_live_lesson_course_category] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_dw_user_watch_live] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_dw_user_community_comment] job exec completed
   	================================================================================
   [INFO] 2020-06-13 20:19:34.762  - [taskAppId=TASK-13-397-520]:[106] -  -> >>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_user_payment_new_log] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_ods_subdivision_capability_base_data] job exec completed
   	================================================================================
   	>>> etlDate:2020-06-13,nowDate:20200613
   	key[key_report_subdivision_capability] job exec completed
   	20/06/13 20:19:34 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
   	org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
   		at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
   		at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:140)
   		at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:655)
   		at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:208)
   		at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:113)
   		at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   		at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   		at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   		at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   		at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   		at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
   		at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
   		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
   		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
   		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
   		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
   		at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
   		at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
   		at java.lang.Thread.run(Thread.java:748)
	... (the same "Could not find CoarseGrainedScheduler" error and stack trace repeat four more times, omitted)`
   
   
   
   Has anyone else run into the same error?
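
   The NoSuchMethodError above means Spark's native ORC reader was compiled against the shaded "nohive" orc-core, whose createRowBatch returns org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch, while an incompatible unshaded orc-core was loaded at runtime. One quick way to confirm which jar the class actually came from is a probe along these lines (a minimal hypothetical sketch, to be run inside the failing job or a spark-shell on the same classpath):

       // Hypothetical diagnostic sketch: print which jar supplied the conflicting class.
       val clazz = Class.forName("org.apache.orc.TypeDescription")
       println(clazz.getProtectionDomain.getCodeSource.getLocation)
       // List the visible createRowBatch overloads and their return types.
       clazz.getMethods.filter(_.getName == "createRowBatch")
         .foreach(m => println(s"${m.getName}(${m.getParameterTypes.mkString(",")}): ${m.getReturnType}"))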


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-dolphinscheduler] zengqinchris closed issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris closed issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967


   





[GitHub] [incubator-dolphinscheduler] zengqinchris commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643738796


   Not solved; I clicked the close button by mistake.





[GitHub] [incubator-dolphinscheduler] yangyichao-mango commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
yangyichao-mango commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643734052


   Looking at the log, this may be an error in the Spark program itself. First, try running the Spark task directly on the machine to see whether it runs normally.
   
   Make sure the Spark program works properly before scheduling it.





[GitHub] [incubator-dolphinscheduler] lijufeng2016 commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
lijufeng2016 commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643622172


   This looks like a jar conflict to me.





[GitHub] [incubator-dolphinscheduler] zengqinchris commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643754330


   orc-core-1.4.4-nohive.jar and orc-core-1.5.1.3.1.0.0-78.jar: it should be these two jars conflicting. Thanks for the reply, buddy.
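
   For context: Spark's native ORC reader (spark.sql.orc.impl=native) is built against the nohive orc-core, so an unshaded build such as orc-core-1.5.1.3.1.0.0-78.jar shadowing it on the classpath produces exactly this NoSuchMethodError. Besides keeping only one orc-core visible to the worker, a commonly cited workaround is to fall back to the Hive-based ORC reader; a minimal sketch, assuming Spark 2.3+ where this setting exists:

       // Sketch: force the Hive-based ORC reader so OrcColumnarBatchReader
       // (the class hitting the NoSuchMethodError) is never used.
       import org.apache.spark.sql.SparkSession

       val spark = SparkSession.builder()
         .config("spark.sql.orc.impl", "hive") // "native" requires the nohive orc-core
         .enableHiveSupport()
         .getOrCreate()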





[GitHub] [incubator-dolphinscheduler] zengqinchris commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643739091


   > > > @zengqinchris conflicting jars?
   > 
   > Yes, I'm sure it's your Spark environment causing this exception, not DS.
   
   The thing is, when I schedule it locally via crontab, it runs fine.
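
   When the same job succeeds under crontab but fails under the scheduler, the two launches are usually resolving different environments: the DS worker sources its own env file (typically dolphinscheduler_env.sh), so SPARK_HOME and the effective jars directory can differ from the interactive shell's. Having the job print what it actually sees makes the difference easy to spot; a minimal sketch using standard JVM and environment lookups:

       // Sketch: print the effective classpath and Spark-related env vars, then
       // compare the output of a crontab-launched run with a DS-scheduled run.
       println(System.getProperty("java.class.path"))
       Seq("SPARK_HOME", "SPARK_CONF_DIR", "HADOOP_CONF_DIR")
         .foreach(k => println(s"$k=${sys.env.getOrElse(k, "<unset>")}"))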





[GitHub] [incubator-dolphinscheduler] gabrywu commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
gabrywu commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643622927


   @zengqinchris conflicting jars?





[GitHub] [incubator-dolphinscheduler] zengqinchris commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643738724


   > @zengqinchris conflicting jars?
   
   





[GitHub] [incubator-dolphinscheduler] zengqinchris commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643739041


   > @zengqinchris conflicting jars?
   
   Could you give me a rough direction? Which jars are likely conflicting?
   





[GitHub] [incubator-dolphinscheduler] lijufeng2016 commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
lijufeng2016 commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643739002


   > > @zengqinchris conflicting jars?
   
   Yes, I'm sure it's your Spark environment causing this exception, not DS.





[GitHub] [incubator-dolphinscheduler] zengqinchris commented on issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris commented on issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967#issuecomment-643738698


   > This looks like a jar conflict to me.
   
   When I run it locally from the command line it doesn't error; it only errors when scheduled on DS.





[GitHub] [incubator-dolphinscheduler] zengqinchris closed issue #2967: Task execution keeps failing: writing in ORC format reports an error

Posted by GitBox <gi...@apache.org>.
zengqinchris closed issue #2967:
URL: https://github.com/apache/incubator-dolphinscheduler/issues/2967


   

