Posted to commits@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2021/06/25 12:54:59 UTC

[GitHub] [dolphinscheduler] chengshiwen commented on issue #5648: [Feature][SparkTask] Spark/Flink Task can run on Kubernetes

chengshiwen commented on issue #5648:
URL: https://github.com/apache/dolphinscheduler/issues/5648#issuecomment-868479357


   @geosmart @blackberrier 
   I think the following things need to be considered:
   1. extend `SparkArgsUtils.java` and `spark.vue` so that the Kubernetes-specific `spark-submit` options can be passed through (see the first sketch after this list)
   2. adapt getting the Spark job state: instead of calling `isSuccessOfYarnState` and the YARN REST API, use the `<driver-pod-name>` (see the second sketch after this list)
   3. adapt killing the Spark job: instead of `yarn application -kill`, use the `<driver-pod-name>`
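   
   For point 1, a minimal sketch of the kind of Kubernetes-specific arguments an extended `SparkArgsUtils.java` might emit; the class, method and parameter names here are illustrative, not the actual DolphinScheduler API:
   
   ```java
   // Illustrative only: a sketch of the k8s-specific spark-submit options
   // an extended SparkArgsUtils could append when the master is a k8s:// URL.
   import java.util.ArrayList;
   import java.util.List;
   
   public class SparkK8sArgsSketch {
   
       public static List<String> buildK8sArgs(String k8sMaster, String namespace,
                                               String image, String driverPodName) {
           List<String> args = new ArrayList<>();
           args.add("--master");
           args.add(k8sMaster);               // e.g. k8s://https://<apiserver-host>:<port>
           args.add("--deploy-mode");
           args.add("cluster");
           args.add("--conf");
           args.add("spark.kubernetes.namespace=" + namespace);
           args.add("--conf");
           args.add("spark.kubernetes.container.image=" + image);
           // Fixing the driver pod name up front is what makes points 2 and 3
           // workable: the state check and the kill both target this pod by name.
           args.add("--conf");
           args.add("spark.kubernetes.driver.pod.name=" + driverPodName);
           return args;
       }
   }
   ```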
   
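   For points 2 and 3, a hedged sketch of how the driver pod can serve both the state check and the kill, assuming the fabric8 `kubernetes-client` library; the namespace and pod name are placeholders supplied by the task context:
   
   ```java
   // Sketch only, assuming the fabric8 kubernetes-client dependency.
   import io.fabric8.kubernetes.api.model.Pod;
   import io.fabric8.kubernetes.client.DefaultKubernetesClient;
   import io.fabric8.kubernetes.client.KubernetesClient;
   
   public class SparkK8sStateSketch {
   
       // Stand-in for the isSuccessOfYarnState / YARN REST API check: the driver
       // pod phase ("Running", "Succeeded", "Failed", ...) reflects the job state.
       public static String getDriverPodPhase(String namespace, String driverPodName) {
           try (KubernetesClient client = new DefaultKubernetesClient()) {
               Pod pod = client.pods().inNamespace(namespace).withName(driverPodName).get();
               return pod == null ? null : pod.getStatus().getPhase();
           }
       }
   
       // Stand-in for `yarn application -kill`: deleting the driver pod stops the job.
       public static void killDriverPod(String namespace, String driverPodName) {
           try (KubernetesClient client = new DefaultKubernetesClient()) {
               client.pods().inNamespace(namespace).withName(driverPodName).delete();
           }
       }
   }
   ```
   
   Deleting the driver pod is one way to cancel a cluster-mode job; `spark-submit --kill` against the Kubernetes master (supported in newer Spark versions) is another option.
   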
   Things that should not be done by us:
   1. changing DolphinScheduler storage from HDFS to MinIO (use a PVC, or the user should solve it)
   2. YARN log aggregation (a third-party framework should solve it)
   3. building Spark Docker images (not our duty)

