Posted to commits@dolphinscheduler.apache.org by li...@apache.org on 2020/03/28 09:46:08 UTC
[incubator-dolphinscheduler] branch dev updated: Support kubernetes
deployment (#2153)
This is an automated email from the ASF dual-hosted git repository.
lidongdai pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler.git
The following commit(s) were added to refs/heads/dev by this push:
new f6ca548 Support kubernetes deployment (#2153)
f6ca548 is described below
commit f6ca5480ddc1f2b7a7393b86977f1b5332dcc9b1
Author: liwenhe1993 <32...@users.noreply.github.com>
AuthorDate: Sat Mar 28 17:45:58 2020 +0800
Support kubernetes deployment (#2153)
* Support kubernetes deployment
* Support kubernetes deployment
---
charts/README.md | 226 +++++++++++++
charts/dolphinscheduler/.helmignore | 23 ++
charts/dolphinscheduler/Chart.yaml | 52 +++
charts/dolphinscheduler/README.md | 226 +++++++++++++
charts/dolphinscheduler/templates/NOTES.txt | 44 +++
charts/dolphinscheduler/templates/_helpers.tpl | 149 +++++++++
.../configmap-dolphinscheduler-alert.yaml | 41 +++
.../configmap-dolphinscheduler-master.yaml | 34 ++
.../configmap-dolphinscheduler-worker.yaml | 39 +++
.../deployment-dolphinscheduler-alert.yaml | 228 +++++++++++++
.../templates/deployment-dolphinscheduler-api.yaml | 161 ++++++++++
.../deployment-dolphinscheduler-frontend.yaml | 102 ++++++
charts/dolphinscheduler/templates/ingress.yaml | 43 +++
.../templates/pvc-dolphinscheduler-alert.yaml | 35 ++
.../templates/pvc-dolphinscheduler-api.yaml | 35 ++
.../templates/pvc-dolphinscheduler-frontend.yaml | 35 ++
.../templates/secret-external-postgresql.yaml | 29 ++
.../statefulset-dolphinscheduler-master.yaml | 247 ++++++++++++++
.../statefulset-dolphinscheduler-worker.yaml | 275 ++++++++++++++++
.../templates/svc-dolphinscheduler-api.yaml | 35 ++
.../templates/svc-dolphinscheduler-frontend.yaml | 35 ++
.../svc-dolphinscheduler-master-headless.yaml | 36 +++
.../svc-dolphinscheduler-worker-headless.yaml | 36 +++
charts/dolphinscheduler/values.yaml | 355 +++++++++++++++++++++
24 files changed, 2521 insertions(+)
diff --git a/charts/README.md b/charts/README.md
new file mode 100644
index 0000000..6f0317b
--- /dev/null
+++ b/charts/README.md
@@ -0,0 +1,226 @@
+# Dolphin Scheduler
+
+[Dolphin Scheduler](https://dolphinscheduler.apache.org) is a distributed, extensible visual DAG workflow scheduling system, dedicated to solving the complex task dependencies in data processing and providing out-of-the-box scheduling for data processing pipelines.
+
+## Introduction
+This chart bootstraps a [Dolphin Scheduler](https://dolphinscheduler.apache.org) distributed deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.10+
+- PV provisioner support in the underlying infrastructure
+
+## Installing the Chart
+
+To install the chart with the release name `dolphinscheduler`:
+
+```bash
+$ git clone https://github.com/apache/incubator-dolphinscheduler.git
+$ cd incubator-dolphinscheduler
+$ helm install --name dolphinscheduler .
+```
+These commands deploy Dolphin Scheduler on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
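+Configuration values can also be overridden at install time with `--set` (parameter names are listed in the [configuration](#configuration) section below; the values shown here are only illustrative):
+
+```bash
+$ helm install --name dolphinscheduler \
+    --set image.tag=1.2.1 \
+    --set master.replicas=3 \
+    --set worker.replicas=3 \
+    .
+```
+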
+## Uninstalling the Chart
+
+To uninstall/delete the `dolphinscheduler` deployment:
+
+```bash
+$ helm delete --purge dolphinscheduler
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Dolphin Scheduler chart and their default values.
+
+| Parameter | Description | Default |
+| --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
+| `timezone` | Time zone for all containers | `Asia/Shanghai` |
+| `image.registry` | Docker image registry for Dolphin Scheduler | `docker.io` |
+| `image.repository` | Docker image repository for Dolphin Scheduler | `dolphinscheduler` |
+| `image.tag` | Docker image version for Dolphin Scheduler | `1.2.1` |
+| `image.imagePullPolicy` | Image pull policy. One of `Always`, `Never`, `IfNotPresent` | `IfNotPresent` |
+| `imagePullSecrets` | An optional list of references to secrets in the same namespace for pulling any of the images | `[]` |
+| | | |
+| `postgresql.enabled` | If no external PostgreSQL is provided, Dolphin Scheduler uses a bundled (internal) PostgreSQL by default | `true` |
+| `postgresql.postgresqlUsername` | The username for internal PostgreSQL | `root` |
+| `postgresql.postgresqlPassword` | The password for internal PostgreSQL | `root` |
+| `postgresql.postgresqlDatabase` | The database for internal PostgreSQL | `dolphinscheduler` |
+| `postgresql.persistence.enabled` | Set `postgresql.persistence.enabled` to `true` to mount a new volume for internal PostgreSQL | `false` |
+| `postgresql.persistence.size` | `PersistentVolumeClaim` Size | `20Gi` |
+| `postgresql.persistence.storageClass` | PostgreSQL data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `externalDatabase.host` | The host of the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `localhost` |
+| `externalDatabase.port` | The port of the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `5432` |
+| `externalDatabase.username` | The username for the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `root` |
+| `externalDatabase.password` | The password for the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `root` |
+| `externalDatabase.database` | The database name for the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `dolphinscheduler` |
+| | | |
+| `zookeeper.enabled` | If no external Zookeeper is provided, Dolphin Scheduler uses a bundled (internal) Zookeeper by default | `true` |
+| `zookeeper.taskQueue` | Specify task queue for `master` and `worker` | `zookeeper` |
+| `zookeeper.persistence.enabled` | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for internal Zookeeper | `false` |
+| `zookeeper.persistence.size` | `PersistentVolumeClaim` Size | `20Gi` |
+| `zookeeper.persistence.storageClass` | Zookeeper data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `externalZookeeper.taskQueue` | Task queue for `master` and `worker`; used when `zookeeper.enabled` is set to `false` | `zookeeper` |
+| `externalZookeeper.zookeeperQuorum` | Zookeeper quorum of the external Zookeeper; used when `zookeeper.enabled` is set to `false` | `127.0.0.1:2181` |
+| | | |
+| `master.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
+| `master.replicas` | Replicas is the desired number of replicas of the given Template | `3` |
+| `master.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `master.tolerations` | If specified, the pod's tolerations | `{}` |
+| `master.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `master.configmap.MASTER_EXEC_THREADS` | Master execute thread num | `100` |
+| `master.configmap.MASTER_EXEC_TASK_NUM` | Master execute task number in parallel | `20` |
+| `master.configmap.MASTER_HEARTBEAT_INTERVAL` | Master heartbeat interval | `10` |
+| `master.configmap.MASTER_TASK_COMMIT_RETRYTIMES` | Master commit task retry times | `5` |
+| `master.configmap.MASTER_TASK_COMMIT_INTERVAL` | Master commit task interval | `1000` |
+| `master.configmap.MASTER_MAX_CPULOAD_AVG` | The master server only works when the CPU load average is below this value; default: number of CPU cores * 2 | `100` |
+| `master.configmap.MASTER_RESERVED_MEMORY` | The master server only works when available memory exceeds this reserved amount; default: physical memory * 1/10, in G | `0.1` |
+| `master.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `master.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `master.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `master.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `master.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `master.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `master.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `master.persistentVolumeClaim.enabled` | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for `master` | `false` |
+| `master.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `master.persistentVolumeClaim.storageClassName` | `Master` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `master.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `worker.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
+| `worker.replicas` | Replicas is the desired number of replicas of the given Template | `3` |
+| `worker.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `worker.tolerations` | If specified, the pod's tolerations | `{}` |
+| `worker.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `worker.configmap.WORKER_EXEC_THREADS` | Worker execute thread num | `100` |
+| `worker.configmap.WORKER_HEARTBEAT_INTERVAL` | Worker heartbeat interval | `10` |
+| `worker.configmap.WORKER_FETCH_TASK_NUM` | Submit the number of tasks at a time | `3` |
+| `worker.configmap.WORKER_MAX_CPULOAD_AVG` | The worker server only works when the CPU load average is below this value; default: number of CPU cores * 2 | `100` |
+| `worker.configmap.WORKER_RESERVED_MEMORY` | The worker server only works when available memory exceeds this reserved amount; default: physical memory * 1/10, in G | `0.1` |
+| `worker.configmap.DOLPHINSCHEDULER_DATA_BASEDIR_PATH` | User data directory path; user-defined, make sure the directory exists and has read/write permissions | `/tmp/dolphinscheduler` |
+| `worker.configmap.DOLPHINSCHEDULER_ENV` | System environment path; user-defined, see `values.yaml` | `[]` |
+| `worker.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `worker.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `worker.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `worker.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `worker.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `worker.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `worker.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `worker.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `worker.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `worker.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `worker.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `worker.persistentVolumeClaim.enabled` | Set `worker.persistentVolumeClaim.enabled` to `true` to enable `persistentVolumeClaim` for `worker` | `false` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for `worker` | `false` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storageClassName` | `Worker` data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for `worker` | `false` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storageClassName` | `Worker` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `alert.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
+| `alert.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
+| `alert.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
+| `alert.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
+| `alert.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `alert.tolerations` | If specified, the pod's tolerations | `{}` |
+| `alert.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `alert.configmap.XLS_FILE_PATH` | XLS file path | `/tmp/xls` |
+| `alert.configmap.MAIL_SERVER_HOST` | Mail server host | `nil` |
+| `alert.configmap.MAIL_SERVER_PORT` | Mail server port | `nil` |
+| `alert.configmap.MAIL_SENDER` | Mail sender | `nil` |
+| `alert.configmap.MAIL_USER` | Mail user | `nil` |
+| `alert.configmap.MAIL_PASSWD` | Mail password | `nil` |
+| `alert.configmap.MAIL_SMTP_STARTTLS_ENABLE` | Enable mail SMTP STARTTLS | `false` |
+| `alert.configmap.MAIL_SMTP_SSL_ENABLE` | Enable mail SMTP SSL | `false` |
+| `alert.configmap.MAIL_SMTP_SSL_TRUST` | Mail SMTP SSL trust | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_ENABLE` | `Enterprise Wechat` enable | `false` |
+| `alert.configmap.ENTERPRISE_WECHAT_CORP_ID` | `Enterprise Wechat` corp id | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_SECRET` | `Enterprise Wechat` secret | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_AGENT_ID` | `Enterprise Wechat` agent id | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_USERS` | `Enterprise Wechat` users | `nil` |
+| `alert.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `alert.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `alert.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `alert.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `alert.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `alert.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `alert.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `alert.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `alert.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `alert.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `alert.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `alert.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `alert.persistentVolumeClaim.enabled` | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for `alert` | `false` |
+| `alert.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `alert.persistentVolumeClaim.storageClassName` | `Alert` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `alert.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `api.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
+| `api.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
+| `api.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
+| `api.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
+| `api.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `api.tolerations` | If specified, the pod's tolerations | `{}` |
+| `api.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `api.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `api.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `api.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `api.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `api.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `api.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `api.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `api.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `api.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `api.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `api.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `api.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `api.persistentVolumeClaim.enabled` | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for `api` | `false` |
+| `api.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `api.persistentVolumeClaim.storageClassName` | `api` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `api.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `frontend.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
+| `frontend.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
+| `frontend.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
+| `frontend.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
+| `frontend.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `frontend.tolerations` | If specified, the pod's tolerations | `{}` |
+| `frontend.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `frontend.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `frontend.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `frontend.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `frontend.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `frontend.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `frontend.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `frontend.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `frontend.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `frontend.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `frontend.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
+| `frontend.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed | `3` |
+| `frontend.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful | `1` |
+| `frontend.persistentVolumeClaim.enabled` | Set `frontend.persistentVolumeClaim.enabled` to `true` to mount a new volume for `frontend` | `false` |
+| `frontend.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `frontend.persistentVolumeClaim.storageClassName` | `frontend` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `frontend.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `ingress.enabled` | Enable ingress | `false` |
+| `ingress.host` | Ingress host | `dolphinscheduler.org` |
+| `ingress.path` | Ingress path | `/` |
+| `ingress.tls.enabled` | Enable ingress tls | `false` |
+| `ingress.tls.hosts` | Ingress tls hosts | `dolphinscheduler.org` |
+| `ingress.tls.secretName` | Ingress tls secret name | `dolphinscheduler-tls` |
+
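+An example of a minimal `values.yaml` override that points the chart at an existing PostgreSQL instead of the bundled one, using the `externalDatabase` parameters listed above (the host and credentials below are placeholders):
+
+```yaml
+postgresql:
+  enabled: false
+externalDatabase:
+  host: "pg.example.internal"
+  port: 5432
+  username: "root"
+  password: "root"
+  database: "dolphinscheduler"
+```
+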
+For more information please refer to the [chart](https://github.com/apache/incubator-dolphinscheduler.git) documentation.
diff --git a/charts/dolphinscheduler/.helmignore b/charts/dolphinscheduler/.helmignore
new file mode 100644
index 0000000..0e8a0eb
--- /dev/null
+++ b/charts/dolphinscheduler/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*.orig
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
diff --git a/charts/dolphinscheduler/Chart.yaml b/charts/dolphinscheduler/Chart.yaml
new file mode 100644
index 0000000..2c40f94
--- /dev/null
+++ b/charts/dolphinscheduler/Chart.yaml
@@ -0,0 +1,52 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+apiVersion: v2
+name: dolphinscheduler
+description: Dolphin Scheduler is a distributed, extensible visual DAG workflow scheduling system, dedicated to solving the complex task dependencies in data processing and providing out-of-the-box scheduling for data processing pipelines.
+home: https://dolphinscheduler.apache.org
+icon: https://dolphinscheduler.apache.org/img/hlogo_colorful.svg
+keywords:
+ - dolphinscheduler
+ - Scheduler
+# A chart can be either an 'application' or a 'library' chart.
+#
+# Application charts are a collection of templates that can be packaged into versioned archives
+# to be deployed.
+#
+# Library charts provide useful utilities or functions for the chart developer. They're included as
+# a dependency of application charts to inject those utilities and functions into the rendering
+# pipeline. Library charts do not define any templates and therefore cannot be deployed.
+type: application
+
+# This is the chart version. This version number should be incremented each time you make changes
+# to the chart and its templates, including the app version.
+version: 0.1.0
+
+# This is the version number of the application being deployed. This version number should be
+# incremented each time you make changes to the application.
+appVersion: 1.2.1
+
+dependencies:
+ - name: postgresql
+ version: 8.x.x
+ repository: https://charts.bitnami.com/bitnami
+ condition: postgresql.enabled
+ - name: zookeeper
+ version: 5.x.x
+ repository: https://charts.bitnami.com/bitnami
+    condition: zookeeper.enabled
diff --git a/charts/dolphinscheduler/README.md b/charts/dolphinscheduler/README.md
new file mode 100644
index 0000000..6f0317b
--- /dev/null
+++ b/charts/dolphinscheduler/README.md
@@ -0,0 +1,226 @@
+# Dolphin Scheduler
+
+[Dolphin Scheduler](https://dolphinscheduler.apache.org) is a distributed, extensible visual DAG workflow scheduling system, dedicated to solving the complex task dependencies in data processing and providing out-of-the-box scheduling for data processing pipelines.
+
+## Introduction
+This chart bootstraps a [Dolphin Scheduler](https://dolphinscheduler.apache.org) distributed deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.10+
+- PV provisioner support in the underlying infrastructure
+
+## Installing the Chart
+
+To install the chart with the release name `dolphinscheduler`:
+
+```bash
+$ git clone https://github.com/apache/incubator-dolphinscheduler.git
+$ cd incubator-dolphinscheduler
+$ helm install --name dolphinscheduler .
+```
+These commands deploy Dolphin Scheduler on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
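+Configuration values can also be overridden at install time with `--set` (parameter names are listed in the [configuration](#configuration) section below; the values shown here are only illustrative):
+
+```bash
+$ helm install --name dolphinscheduler \
+    --set image.tag=1.2.1 \
+    --set master.replicas=3 \
+    --set worker.replicas=3 \
+    .
+```
+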
+## Uninstalling the Chart
+
+To uninstall/delete the `dolphinscheduler` deployment:
+
+```bash
+$ helm delete --purge dolphinscheduler
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Dolphin Scheduler chart and their default values.
+
+| Parameter | Description | Default |
+| --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
+| `timezone` | Time zone for all containers | `Asia/Shanghai` |
+| `image.registry` | Docker image registry for Dolphin Scheduler | `docker.io` |
+| `image.repository` | Docker image repository for Dolphin Scheduler | `dolphinscheduler` |
+| `image.tag` | Docker image version for Dolphin Scheduler | `1.2.1` |
+| `image.imagePullPolicy` | Image pull policy. One of `Always`, `Never`, `IfNotPresent` | `IfNotPresent` |
+| `imagePullSecrets` | An optional list of references to secrets in the same namespace for pulling any of the images | `[]` |
+| | | |
+| `postgresql.enabled` | If no external PostgreSQL is provided, Dolphin Scheduler uses a bundled (internal) PostgreSQL by default | `true` |
+| `postgresql.postgresqlUsername` | The username for internal PostgreSQL | `root` |
+| `postgresql.postgresqlPassword` | The password for internal PostgreSQL | `root` |
+| `postgresql.postgresqlDatabase` | The database for internal PostgreSQL | `dolphinscheduler` |
+| `postgresql.persistence.enabled` | Set `postgresql.persistence.enabled` to `true` to mount a new volume for internal PostgreSQL | `false` |
+| `postgresql.persistence.size` | `PersistentVolumeClaim` Size | `20Gi` |
+| `postgresql.persistence.storageClass` | PostgreSQL data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `externalDatabase.host` | The host of the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `localhost` |
+| `externalDatabase.port` | The port of the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `5432` |
+| `externalDatabase.username` | The username for the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `root` |
+| `externalDatabase.password` | The password for the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `root` |
+| `externalDatabase.database` | The database name for the external PostgreSQL; used when `postgresql.enabled` is set to `false` | `dolphinscheduler` |
+| | | |
+| `zookeeper.enabled` | If no external Zookeeper is provided, Dolphin Scheduler uses a bundled (internal) Zookeeper by default | `true` |
+| `zookeeper.taskQueue` | Specify task queue for `master` and `worker` | `zookeeper` |
+| `zookeeper.persistence.enabled` | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for internal Zookeeper | `false` |
+| `zookeeper.persistence.size` | `PersistentVolumeClaim` Size | `20Gi` |
+| `zookeeper.persistence.storageClass` | Zookeeper data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `externalZookeeper.taskQueue` | Task queue for `master` and `worker`; used when `zookeeper.enabled` is set to `false` | `zookeeper` |
+| `externalZookeeper.zookeeperQuorum` | Zookeeper quorum of the external Zookeeper; used when `zookeeper.enabled` is set to `false` | `127.0.0.1:2181` |
+| | | |
+| `master.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
+| `master.replicas` | Replicas is the desired number of replicas of the given Template | `3` |
+| `master.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `master.tolerations` | If specified, the pod's tolerations | `{}` |
+| `master.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `master.configmap.MASTER_EXEC_THREADS` | Master execute thread num | `100` |
+| `master.configmap.MASTER_EXEC_TASK_NUM` | Master execute task number in parallel | `20` |
+| `master.configmap.MASTER_HEARTBEAT_INTERVAL` | Master heartbeat interval | `10` |
+| `master.configmap.MASTER_TASK_COMMIT_RETRYTIMES` | Master commit task retry times | `5` |
+| `master.configmap.MASTER_TASK_COMMIT_INTERVAL` | Master commit task interval | `1000` |
+| `master.configmap.MASTER_MAX_CPULOAD_AVG` | The master server only works when the CPU load average is below this value; default: number of CPU cores * 2 | `100` |
+| `master.configmap.MASTER_RESERVED_MEMORY` | The master server only works when available memory exceeds this reserved amount; default: physical memory * 1/10, in G | `0.1` |
+| `master.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `master.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `master.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `master.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `master.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `master.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `master.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `master.persistentVolumeClaim.enabled` | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for `master` | `false` |
+| `master.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `master.persistentVolumeClaim.storageClassName` | `Master` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `master.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `worker.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
+| `worker.replicas` | Replicas is the desired number of replicas of the given Template | `3` |
+| `worker.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `worker.tolerations` | If specified, the pod's tolerations | `{}` |
+| `worker.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `worker.configmap.WORKER_EXEC_THREADS` | Worker execute thread num | `100` |
+| `worker.configmap.WORKER_HEARTBEAT_INTERVAL` | Worker heartbeat interval | `10` |
+| `worker.configmap.WORKER_FETCH_TASK_NUM` | Submit the number of tasks at a time | `3` |
+| `worker.configmap.WORKER_MAX_CPULOAD_AVG` | The worker server can only work when the CPU load average is below this value. Default value: the number of CPU cores * 2 | `100` |
+| `worker.configmap.WORKER_RESERVED_MEMORY` | The worker server can only work when available memory is above this reserved value. Default value: physical memory * 1/10, unit is G | `0.1` |
+| `worker.configmap.DOLPHINSCHEDULER_DATA_BASEDIR_PATH` | User data directory path, self-configured; please make sure the directory exists and has read/write permissions | `/tmp/dolphinscheduler` |
+| `worker.configmap.DOLPHINSCHEDULER_ENV` | System env path, self-configured; please refer to `values.yaml` | `[]` |
+| `worker.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `worker.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `worker.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `worker.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `worker.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `worker.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `worker.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `worker.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `worker.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `worker.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `worker.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `worker.persistentVolumeClaim.enabled` | Set `worker.persistentVolumeClaim.enabled` to `true` to enable `persistentVolumeClaim` for `worker` | `false` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for `worker` | `false` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storageClassName` | `Worker` data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for `worker` | `false` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storageClassName` | `Worker` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `alert.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
+| `alert.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
+| `alert.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
+| `alert.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
+| `alert.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `alert.tolerations` | If specified, the pod's tolerations | `{}` |
+| `alert.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `alert.configmap.XLS_FILE_PATH` | XLS file path | `/tmp/xls` |
+| `alert.configmap.MAIL_SERVER_HOST` | Mail `SERVER HOST` | `nil` |
+| `alert.configmap.MAIL_SERVER_PORT` | Mail `SERVER PORT` | `nil` |
+| `alert.configmap.MAIL_SENDER` | Mail `SENDER` | `nil` |
+| `alert.configmap.MAIL_USER` | Mail `USER` | `nil` |
+| `alert.configmap.MAIL_PASSWD` | Mail `PASSWORD` | `nil` |
+| `alert.configmap.MAIL_SMTP_STARTTLS_ENABLE` | Mail `SMTP STARTTLS` enable | `false` |
+| `alert.configmap.MAIL_SMTP_SSL_ENABLE` | Mail `SMTP SSL` enable | `false` |
+| `alert.configmap.MAIL_SMTP_SSL_TRUST` | Mail `SMTP SSL TRUST` | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_ENABLE` | `Enterprise Wechat` enable | `false` |
+| `alert.configmap.ENTERPRISE_WECHAT_CORP_ID` | `Enterprise Wechat` corp id | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_SECRET` | `Enterprise Wechat` secret | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_AGENT_ID` | `Enterprise Wechat` agent id | `nil` |
+| `alert.configmap.ENTERPRISE_WECHAT_USERS` | `Enterprise Wechat` users | `nil` |
+| `alert.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `alert.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `alert.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `alert.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `alert.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `alert.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `alert.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `alert.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `alert.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `alert.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `alert.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `alert.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `alert.persistentVolumeClaim.enabled` | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for `alert` | `false` |
+| `alert.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `alert.persistentVolumeClaim.storageClassName` | `Alert` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `alert.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `api.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
+| `api.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
+| `api.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
+| `api.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
+| `api.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `api.tolerations` | If specified, the pod's tolerations | `{}` |
+| `api.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `api.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `api.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `api.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `api.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `api.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `api.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `api.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `api.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `api.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `api.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `api.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `api.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `api.persistentVolumeClaim.enabled` | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for `api` | `false` |
+| `api.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `api.persistentVolumeClaim.storageClassName` | `api` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `api.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `frontend.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
+| `frontend.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
+| `frontend.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
+| `frontend.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
+| `frontend.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
+| `frontend.tolerations` | If specified, the pod's tolerations | `{}` |
+| `frontend.affinity` | If specified, the pod's scheduling constraints | `{}` |
+| `frontend.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `frontend.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `frontend.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `frontend.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `frontend.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `frontend.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `frontend.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `frontend.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `frontend.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
+| `frontend.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `frontend.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `frontend.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `frontend.persistentVolumeClaim.enabled` | Set `frontend.persistentVolumeClaim.enabled` to `true` to mount a new volume for `frontend` | `false` |
+| `frontend.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` Access Modes | `[ReadWriteOnce]` |
+| `frontend.persistentVolumeClaim.storageClassName` | `frontend` logs data Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
+| `frontend.persistentVolumeClaim.storage` | `PersistentVolumeClaim` Size | `20Gi` |
+| | | |
+| `ingress.enabled` | Enable ingress | `false` |
+| `ingress.host` | Ingress host | `dolphinscheduler.org` |
+| `ingress.path` | Ingress path | `/` |
+| `ingress.tls.enabled` | Enable ingress tls | `false` |
+| `ingress.tls.hosts` | Ingress tls hosts | `dolphinscheduler.org` |
+| `ingress.tls.secretName` | Ingress tls secret name | `dolphinscheduler-tls` |
+
+For more information please refer to the [chart](https://github.com/apache/incubator-dolphinscheduler.git) documentation.
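+
+As an illustrative sketch, several of the parameters above can be combined in a custom values file and passed via `helm install -f` (the hostnames and sizes below are placeholders, not chart defaults):
+
+```yaml
+# custom-values.yaml - example overrides, adjust to your environment
+zookeeper:
+  enabled: false
+externalZookeeper:
+  taskQueue: "zookeeper"
+  zookeeperQuorum: "zk-0.example.com:2181,zk-1.example.com:2181"
+worker:
+  persistentVolumeClaim:
+    enabled: true
+    dataPersistentVolume:
+      enabled: true
+      storage: "50Gi"
+ingress:
+  enabled: true
+  host: "dolphinscheduler.example.com"
+```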
diff --git a/charts/dolphinscheduler/templates/NOTES.txt b/charts/dolphinscheduler/templates/NOTES.txt
new file mode 100644
index 0000000..eb3a9cf
--- /dev/null
+++ b/charts/dolphinscheduler/templates/NOTES.txt
@@ -0,0 +1,44 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+** Please be patient while the chart is being deployed **
+
+1. Get the Dolphinscheduler URL by running:
+
+{{- if .Values.ingress.enabled }}
+
+ export HOSTNAME=$(kubectl get ingress --namespace {{ .Release.Namespace }} {{ template "dolphinscheduler.fullname" . }} -o jsonpath='{.spec.rules[0].host}')
+ echo "Dolphinscheduler URL: http://$HOSTNAME/"
+
+{{- else }}
+
+ kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "dolphinscheduler.fullname" . }}-frontend 8888:8888
+
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/_helpers.tpl b/charts/dolphinscheduler/templates/_helpers.tpl
new file mode 100644
index 0000000..37fb034
--- /dev/null
+++ b/charts/dolphinscheduler/templates/_helpers.tpl
@@ -0,0 +1,149 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "dolphinscheduler.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "dolphinscheduler.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "dolphinscheduler.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Common labels
+*/}}
+{{- define "dolphinscheduler.labels" -}}
+helm.sh/chart: {{ include "dolphinscheduler.chart" . }}
+{{ include "dolphinscheduler.selectorLabels" . }}
+{{- if .Chart.AppVersion }}
+app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
+{{- end }}
+app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- end -}}
+
+{{/*
+Selector labels
+*/}}
+{{- define "dolphinscheduler.selectorLabels" -}}
+app.kubernetes.io/name: {{ include "dolphinscheduler.name" . }}
+app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end -}}
+
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "dolphinscheduler.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "dolphinscheduler.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create a default docker image registry.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.image.registry" -}}
+{{- $registry := default "docker.io" .Values.image.registry -}}
+{{- printf "%s" $registry | trunc 63 | trimSuffix "/" -}}
+{{- end -}}
+
+{{/*
+Create a default docker image repository.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.image.repository" -}}
+{{- printf "%s/%s:%s" (include "dolphinscheduler.image.registry" .) .Values.image.repository .Values.image.tag -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified postgresql name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.postgresql.fullname" -}}
+{{- $name := default "postgresql" .Values.postgresql.nameOverride -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified zookeeper name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.zookeeper.fullname" -}}
+{{- $name := default "zookeeper" .Values.zookeeper.nameOverride -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified zookeeper quorum.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.zookeeper.quorum" -}}
+{{- $port := default "2181" (.Values.zookeeper.service.port | toString) -}}
+{{- printf "%s:%s" (include "dolphinscheduler.zookeeper.fullname" .) $port | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default dolphinscheduler worker base dir.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.worker.base.dir" -}}
+{{- $name := default "/tmp/dolphinscheduler" .Values.worker.configmap.DOLPHINSCHEDULER_DATA_BASEDIR_PATH -}}
+{{- printf "%s" $name | trunc 63 | trimSuffix "/" -}}
+{{- end -}}
+
+{{/*
+Create a default dolphinscheduler worker data download dir.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.worker.data.download.dir" -}}
+{{- printf "%s%s" (include "dolphinscheduler.worker.base.dir" .) "/download" -}}
+{{- end -}}
+
+{{/*
+Create a default dolphinscheduler worker process exec dir.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "dolphinscheduler.worker.process.exec.dir" -}}
+{{- printf "%s%s" (include "dolphinscheduler.worker.base.dir" .) "/exec" -}}
+{{- end -}}
\ No newline at end of file
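
The image helper templates above compose the final image reference as `<registry>/<repository>:<tag>`, defaulting the registry to `docker.io`. As a sketch with hypothetical values (the repository name and tag are placeholders, not chart defaults):

```yaml
image:
  registry: "docker.io"                 # default when unset
  repository: "apache/dolphinscheduler" # placeholder repository
  tag: "latest"                         # placeholder tag
# "dolphinscheduler.image.repository" would then render as:
#   docker.io/apache/dolphinscheduler:latest
```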
diff --git a/charts/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml b/charts/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml
new file mode 100644
index 0000000..76daad8
--- /dev/null
+++ b/charts/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml
@@ -0,0 +1,41 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.alert.configmap }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-alert
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ XLS_FILE_PATH: {{ .Values.alert.configmap.XLS_FILE_PATH | quote }}
+ MAIL_SERVER_HOST: {{ .Values.alert.configmap.MAIL_SERVER_HOST | quote }}
+ MAIL_SERVER_PORT: {{ .Values.alert.configmap.MAIL_SERVER_PORT | quote }}
+ MAIL_SENDER: {{ .Values.alert.configmap.MAIL_SENDER | quote }}
+ MAIL_USER: {{ .Values.alert.configmap.MAIL_USER | quote }}
+ MAIL_PASSWD: {{ .Values.alert.configmap.MAIL_PASSWD | quote }}
+ MAIL_SMTP_STARTTLS_ENABLE: {{ .Values.alert.configmap.MAIL_SMTP_STARTTLS_ENABLE | quote }}
+ MAIL_SMTP_SSL_ENABLE: {{ .Values.alert.configmap.MAIL_SMTP_SSL_ENABLE | quote }}
+ MAIL_SMTP_SSL_TRUST: {{ .Values.alert.configmap.MAIL_SMTP_SSL_TRUST | quote }}
+ ENTERPRISE_WECHAT_ENABLE: {{ .Values.alert.configmap.ENTERPRISE_WECHAT_ENABLE | quote }}
+ ENTERPRISE_WECHAT_CORP_ID: {{ .Values.alert.configmap.ENTERPRISE_WECHAT_CORP_ID | quote }}
+ ENTERPRISE_WECHAT_SECRET: {{ .Values.alert.configmap.ENTERPRISE_WECHAT_SECRET | quote }}
+ ENTERPRISE_WECHAT_AGENT_ID: {{ .Values.alert.configmap.ENTERPRISE_WECHAT_AGENT_ID | quote }}
+ ENTERPRISE_WECHAT_USERS: {{ .Values.alert.configmap.ENTERPRISE_WECHAT_USERS | quote }}
+{{- end }}
\ No newline at end of file
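
The ConfigMap above is rendered only when `alert.configmap` is set. A hypothetical set of mail values for it (host, sender, and password are placeholders; for real deployments consider keeping `MAIL_PASSWD` in a Secret rather than a ConfigMap):

```yaml
alert:
  configmap:
    XLS_FILE_PATH: "/tmp/xls"
    MAIL_SERVER_HOST: "smtp.example.com"  # placeholder
    MAIL_SERVER_PORT: "25"
    MAIL_SENDER: "alerts@example.com"     # placeholder
    MAIL_USER: "alerts@example.com"       # placeholder
    MAIL_PASSWD: "changeme"               # placeholder
    MAIL_SMTP_STARTTLS_ENABLE: "false"
    MAIL_SMTP_SSL_ENABLE: "false"
```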
diff --git a/charts/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml b/charts/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml
new file mode 100644
index 0000000..8cce068
--- /dev/null
+++ b/charts/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml
@@ -0,0 +1,34 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.master.configmap }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ MASTER_EXEC_THREADS: {{ .Values.master.configmap.MASTER_EXEC_THREADS | quote }}
+ MASTER_EXEC_TASK_NUM: {{ .Values.master.configmap.MASTER_EXEC_TASK_NUM | quote }}
+ MASTER_HEARTBEAT_INTERVAL: {{ .Values.master.configmap.MASTER_HEARTBEAT_INTERVAL | quote }}
+ MASTER_TASK_COMMIT_RETRYTIMES: {{ .Values.master.configmap.MASTER_TASK_COMMIT_RETRYTIMES | quote }}
+ MASTER_TASK_COMMIT_INTERVAL: {{ .Values.master.configmap.MASTER_TASK_COMMIT_INTERVAL | quote }}
+ MASTER_MAX_CPULOAD_AVG: {{ .Values.master.configmap.MASTER_MAX_CPULOAD_AVG | quote }}
+ MASTER_RESERVED_MEMORY: {{ .Values.master.configmap.MASTER_RESERVED_MEMORY | quote }}
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml b/charts/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml
new file mode 100644
index 0000000..be7391f
--- /dev/null
+++ b/charts/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml
@@ -0,0 +1,39 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.worker.configmap }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ WORKER_EXEC_THREADS: {{ .Values.worker.configmap.WORKER_EXEC_THREADS | quote }}
+ WORKER_HEARTBEAT_INTERVAL: {{ .Values.worker.configmap.WORKER_HEARTBEAT_INTERVAL | quote }}
+ WORKER_FETCH_TASK_NUM: {{ .Values.worker.configmap.WORKER_FETCH_TASK_NUM | quote }}
+ WORKER_MAX_CPULOAD_AVG: {{ .Values.worker.configmap.WORKER_MAX_CPULOAD_AVG | quote }}
+ WORKER_RESERVED_MEMORY: {{ .Values.worker.configmap.WORKER_RESERVED_MEMORY | quote }}
+ DOLPHINSCHEDULER_DATA_BASEDIR_PATH: {{ include "dolphinscheduler.worker.base.dir" . | quote }}
+ DOLPHINSCHEDULER_DATA_DOWNLOAD_BASEDIR_PATH: {{ include "dolphinscheduler.worker.data.download.dir" . | quote }}
+ DOLPHINSCHEDULER_PROCESS_EXEC_BASEPATH: {{ include "dolphinscheduler.worker.process.exec.dir" . | quote }}
+ dolphinscheduler_env.sh: |-
+ {{- range .Values.worker.configmap.DOLPHINSCHEDULER_ENV }}
+ {{ . }}
+ {{- end }}
+{{- end }}
\ No newline at end of file
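
Note that `DOLPHINSCHEDULER_ENV` is a list of shell lines that the template above joins into the `dolphinscheduler_env.sh` key. A hypothetical entry (the paths are placeholders for your environment):

```yaml
worker:
  configmap:
    DOLPHINSCHEDULER_ENV:
      - "export JAVA_HOME=/opt/soft/java"     # placeholder path
      - "export HADOOP_HOME=/opt/soft/hadoop" # placeholder path
      - "export PATH=$JAVA_HOME/bin:$PATH"
```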
diff --git a/charts/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml b/charts/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml
new file mode 100644
index 0000000..26026f7
--- /dev/null
+++ b/charts/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml
@@ -0,0 +1,228 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-alert
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: alert
+spec:
+ replicas: {{ .Values.alert.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-alert
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: alert
+ strategy:
+ type: {{ .Values.alert.strategy.type | quote }}
+ rollingUpdate:
+ maxSurge: {{ .Values.alert.strategy.rollingUpdate.maxSurge | quote }}
+ maxUnavailable: {{ .Values.alert.strategy.rollingUpdate.maxUnavailable | quote }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-alert
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: alert
+ spec:
+ {{- if .Values.alert.affinity }}
+ affinity: {{- toYaml .Values.alert.affinity | nindent 8 }}
+ {{- end }}
+ {{- if .Values.alert.nodeSelector }}
+ nodeSelector: {{- toYaml .Values.alert.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.alert.tolerations }}
+ tolerations: {{- toYaml .Values.alert.tolerations | nindent 8 }}
+ {{- end }}
+ initContainers:
+ - name: init-postgresql
+ image: busybox:1.31.0
+ command:
+ - /bin/sh
+ - -ec
+ - |
+ while ! nc -z ${POSTGRESQL_HOST} ${POSTGRESQL_PORT}; do
+ counter=$((counter+1))
+ if [ $counter -eq 5 ]; then
+ echo "Error: Couldn't connect to postgresql."
+ exit 1
+ fi
+ echo "Trying to connect to postgresql at ${POSTGRESQL_HOST}:${POSTGRESQL_PORT}. Attempt $counter."
+ sleep 60
+ done
+ env:
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
 value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ containers:
+ - name: {{ include "dolphinscheduler.fullname" . }}-alert
+ image: {{ include "dolphinscheduler.image.repository" . | quote }}
+ args:
+ - "alert-server"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: TZ
+ value: {{ .Values.timezone }}
+ - name: XLS_FILE_PATH
+ valueFrom:
+ configMapKeyRef:
+ key: XLS_FILE_PATH
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_SERVER_HOST
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_SERVER_HOST
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_SERVER_PORT
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_SERVER_PORT
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_SENDER
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_SENDER
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_USER
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_USER
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_PASSWD
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_PASSWD
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_SMTP_STARTTLS_ENABLE
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_SMTP_STARTTLS_ENABLE
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_SMTP_SSL_ENABLE
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_SMTP_SSL_ENABLE
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: MAIL_SMTP_SSL_TRUST
+ valueFrom:
+ configMapKeyRef:
+ key: MAIL_SMTP_SSL_TRUST
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: ENTERPRISE_WECHAT_ENABLE
+ valueFrom:
+ configMapKeyRef:
+ key: ENTERPRISE_WECHAT_ENABLE
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: ENTERPRISE_WECHAT_CORP_ID
+ valueFrom:
+ configMapKeyRef:
+ key: ENTERPRISE_WECHAT_CORP_ID
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: ENTERPRISE_WECHAT_SECRET
+ valueFrom:
+ configMapKeyRef:
+ key: ENTERPRISE_WECHAT_SECRET
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: ENTERPRISE_WECHAT_AGENT_ID
+ valueFrom:
+ configMapKeyRef:
+ key: ENTERPRISE_WECHAT_AGENT_ID
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: ENTERPRISE_WECHAT_USERS
+ valueFrom:
+ configMapKeyRef:
+ key: ENTERPRISE_WECHAT_USERS
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
 value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ - name: POSTGRESQL_USERNAME
+ {{- if .Values.postgresql.enabled }}
 value: {{ .Values.postgresql.postgresqlUsername | quote }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.username | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ {{- if .Values.postgresql.enabled }}
+ name: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ key: postgresql-password
+ {{- else }}
+ name: {{ printf "%s-%s" .Release.Name "externaldb" }}
+ key: db-password
+ {{- end }}
+ {{- if .Values.alert.livenessProbe.enabled }}
+ livenessProbe:
+ exec:
+ command:
+ - sh
+ - /root/checkpoint.sh
 - alert-server
+ initialDelaySeconds: {{ .Values.alert.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.alert.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.alert.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.alert.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.alert.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.alert.readinessProbe.enabled }}
+ readinessProbe:
+ exec:
+ command:
+ - sh
+ - /root/checkpoint.sh
 - alert-server
+ initialDelaySeconds: {{ .Values.alert.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.alert.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.alert.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.alert.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.alert.readinessProbe.failureThreshold }}
+ {{- end }}
+ volumeMounts:
+ - mountPath: "/opt/dolphinscheduler/logs"
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ volumes:
+ - name: {{ include "dolphinscheduler.fullname" . }}-alert
+ {{- if .Values.alert.persistentVolumeClaim.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ include "dolphinscheduler.fullname" . }}-alert
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml b/charts/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml
new file mode 100644
index 0000000..926ce3c
--- /dev/null
+++ b/charts/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml
@@ -0,0 +1,161 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-api
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: api
+spec:
+ replicas: {{ .Values.api.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: api
+ strategy:
+ type: {{ .Values.api.strategy.type | quote }}
+ rollingUpdate:
+ maxSurge: {{ .Values.api.strategy.rollingUpdate.maxSurge | quote }}
+ maxUnavailable: {{ .Values.api.strategy.rollingUpdate.maxUnavailable | quote }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: api
+ spec:
+ {{- if .Values.api.affinity }}
+ affinity: {{- toYaml .Values.api.affinity | nindent 8 }}
+ {{- end }}
+ {{- if .Values.api.nodeSelector }}
+ nodeSelector: {{- toYaml .Values.api.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.api.tolerations }}
 tolerations: {{- toYaml .Values.api.tolerations | nindent 8 }}
+ {{- end }}
+ initContainers:
+ - name: init-postgresql
+ image: busybox:1.31.0
+ command:
+ - /bin/sh
+ - -ec
+ - |
+ while ! nc -z ${POSTGRESQL_HOST} ${POSTGRESQL_PORT}; do
+ counter=$((counter+1))
 if [ $counter -eq 5 ]; then
+ echo "Error: Couldn't connect to postgresql."
+ exit 1
+ fi
+ echo "Trying to connect to postgresql at ${POSTGRESQL_HOST}:${POSTGRESQL_PORT}. Attempt $counter."
+ sleep 60
+ done
+ env:
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
 value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ containers:
+ - name: {{ include "dolphinscheduler.fullname" . }}-api
+ image: {{ include "dolphinscheduler.image.repository" . | quote }}
+ args:
+ - "api-server"
+ ports:
+ - containerPort: 12345
+ name: tcp-port
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: TZ
+ value: {{ .Values.timezone }}
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
 value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ - name: POSTGRESQL_USERNAME
+ {{- if .Values.postgresql.enabled }}
 value: {{ .Values.postgresql.postgresqlUsername | quote }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.username | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ {{- if .Values.postgresql.enabled }}
+ name: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ key: postgresql-password
+ {{- else }}
+ name: {{ printf "%s-%s" .Release.Name "externaldb" }}
+ key: db-password
+ {{- end }}
+ - name: ZOOKEEPER_QUORUM
+ {{- if .Values.zookeeper.enabled }}
+ value: "{{ template "dolphinscheduler.zookeeper.quorum" . }}"
+ {{- else }}
 value: {{ .Values.externalZookeeper.zookeeperQuorum | quote }}
+ {{- end }}
+ {{- if .Values.api.livenessProbe.enabled }}
+ livenessProbe:
+ tcpSocket:
+ port: 12345
+ initialDelaySeconds: {{ .Values.api.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.api.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.api.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.api.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.api.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.api.readinessProbe.enabled }}
+ readinessProbe:
+ tcpSocket:
+ port: 12345
+ initialDelaySeconds: {{ .Values.api.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.api.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.api.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.api.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.api.readinessProbe.failureThreshold }}
+ {{- end }}
+ volumeMounts:
+ - mountPath: "/opt/dolphinscheduler/logs"
+ name: {{ include "dolphinscheduler.fullname" . }}-api
+ volumes:
+ - name: {{ include "dolphinscheduler.fullname" . }}-api
+ {{- if .Values.api.persistentVolumeClaim.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ include "dolphinscheduler.fullname" . }}-api
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/deployment-dolphinscheduler-frontend.yaml b/charts/dolphinscheduler/templates/deployment-dolphinscheduler-frontend.yaml
new file mode 100644
index 0000000..aea09f1
--- /dev/null
+++ b/charts/dolphinscheduler/templates/deployment-dolphinscheduler-frontend.yaml
@@ -0,0 +1,102 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: frontend
+spec:
+ replicas: {{ .Values.frontend.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: frontend
+ strategy:
+ type: {{ .Values.frontend.strategy.type | quote }}
+ rollingUpdate:
+ maxSurge: {{ .Values.frontend.strategy.rollingUpdate.maxSurge | quote }}
+ maxUnavailable: {{ .Values.frontend.strategy.rollingUpdate.maxUnavailable | quote }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: frontend
+ spec:
+ {{- if .Values.frontend.affinity }}
+ affinity: {{- toYaml .Values.frontend.affinity | nindent 8 }}
+ {{- end }}
+ {{- if .Values.frontend.nodeSelector }}
+ nodeSelector: {{- toYaml .Values.frontend.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.frontend.tolerations }}
 tolerations: {{- toYaml .Values.frontend.tolerations | nindent 8 }}
+ {{- end }}
+ containers:
+ - name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ image: {{ include "dolphinscheduler.image.repository" . | quote }}
+ args:
+ - "frontend"
+ ports:
+ - containerPort: 8888
+ name: tcp-port
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: TZ
+ value: {{ .Values.timezone }}
+ - name: FRONTEND_API_SERVER_HOST
+ value: '{{ include "dolphinscheduler.fullname" . }}-api'
+ - name: FRONTEND_API_SERVER_PORT
+ value: "12345"
+ {{- if .Values.frontend.livenessProbe.enabled }}
+ livenessProbe:
+ tcpSocket:
+ port: 8888
+ initialDelaySeconds: {{ .Values.frontend.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.frontend.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.frontend.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.frontend.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.frontend.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.frontend.readinessProbe.enabled }}
+ readinessProbe:
+ tcpSocket:
+ port: 8888
+ initialDelaySeconds: {{ .Values.frontend.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.frontend.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.frontend.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.frontend.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.frontend.readinessProbe.failureThreshold }}
+ {{- end }}
+ volumeMounts:
+ - mountPath: "/var/log/nginx"
+ name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ volumes:
+ - name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ {{- if .Values.frontend.persistentVolumeClaim.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ include "dolphinscheduler.fullname" . }}-frontend
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/ingress.yaml b/charts/dolphinscheduler/templates/ingress.yaml
new file mode 100644
index 0000000..d0f923d
--- /dev/null
+++ b/charts/dolphinscheduler/templates/ingress.yaml
@@ -0,0 +1,43 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.ingress.enabled }}
+apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ rules:
+ - host: {{ .Values.ingress.host }}
+ http:
+ paths:
+ - path: {{ .Values.ingress.path }}
+ backend:
+ serviceName: {{ include "dolphinscheduler.fullname" . }}-frontend
+ servicePort: tcp-port
+ {{- if .Values.ingress.tls.enabled }}
+ tls:
+ hosts:
+ {{- range .Values.ingress.tls.hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .Values.ingress.tls.secretName }}
+ {{- end }}
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/pvc-dolphinscheduler-alert.yaml b/charts/dolphinscheduler/templates/pvc-dolphinscheduler-alert.yaml
new file mode 100644
index 0000000..7f74cd9
--- /dev/null
+++ b/charts/dolphinscheduler/templates/pvc-dolphinscheduler-alert.yaml
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.alert.persistentVolumeClaim.enabled }}
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-alert
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-alert
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ accessModes:
+ {{- range .Values.alert.persistentVolumeClaim.accessModes }}
+ - {{ . | quote }}
+ {{- end }}
+ storageClassName: {{ .Values.alert.persistentVolumeClaim.storageClassName | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.alert.persistentVolumeClaim.storage | quote }}
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/pvc-dolphinscheduler-api.yaml b/charts/dolphinscheduler/templates/pvc-dolphinscheduler-api.yaml
new file mode 100644
index 0000000..c1074cc
--- /dev/null
+++ b/charts/dolphinscheduler/templates/pvc-dolphinscheduler-api.yaml
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.api.persistentVolumeClaim.enabled }}
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-api
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ accessModes:
+ {{- range .Values.api.persistentVolumeClaim.accessModes }}
+ - {{ . | quote }}
+ {{- end }}
+ storageClassName: {{ .Values.api.persistentVolumeClaim.storageClassName | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.api.persistentVolumeClaim.storage | quote }}
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/pvc-dolphinscheduler-frontend.yaml b/charts/dolphinscheduler/templates/pvc-dolphinscheduler-frontend.yaml
new file mode 100644
index 0000000..ac9fe02
--- /dev/null
+++ b/charts/dolphinscheduler/templates/pvc-dolphinscheduler-frontend.yaml
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if .Values.frontend.persistentVolumeClaim.enabled }}
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ accessModes:
+ {{- range .Values.frontend.persistentVolumeClaim.accessModes }}
+ - {{ . | quote }}
+ {{- end }}
+ storageClassName: {{ .Values.frontend.persistentVolumeClaim.storageClassName | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.frontend.persistentVolumeClaim.storage | quote }}
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/secret-external-postgresql.yaml b/charts/dolphinscheduler/templates/secret-external-postgresql.yaml
new file mode 100644
index 0000000..16d026a
--- /dev/null
+++ b/charts/dolphinscheduler/templates/secret-external-postgresql.yaml
@@ -0,0 +1,29 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+{{- if not .Values.postgresql.enabled }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ printf "%s-%s" .Release.Name "externaldb" }}
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-postgresql
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ db-password: {{ .Values.externalDatabase.password | b64enc | quote }}
+{{- end }}
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml b/charts/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml
new file mode 100644
index 0000000..ac97412
--- /dev/null
+++ b/charts/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml
@@ -0,0 +1,247 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: master
+spec:
+ podManagementPolicy: {{ .Values.master.podManagementPolicy }}
+ replicas: {{ .Values.master.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: master
+ serviceName: {{ template "dolphinscheduler.fullname" . }}-master-headless
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: master
+ spec:
+ {{- if .Values.master.affinity }}
+ affinity: {{- toYaml .Values.master.affinity | nindent 8 }}
+ {{- end }}
+ {{- if .Values.master.nodeSelector }}
+ nodeSelector: {{- toYaml .Values.master.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.master.tolerations }}
 tolerations: {{- toYaml .Values.master.tolerations | nindent 8 }}
+ {{- end }}
+ initContainers:
+ - name: init-zookeeper
+ image: busybox:1.31.0
+ command:
+ - /bin/sh
+ - -ec
+ - |
+ echo "${ZOOKEEPER_QUORUM}" | awk -F ',' 'BEGIN{ i=1 }{ while( i <= NF ){ print $i; i++ } }' | while read line; do
+ while ! nc -z ${line%:*} ${line#*:}; do
+ counter=$((counter+1))
 if [ $counter -eq 5 ]; then
+ echo "Error: Couldn't connect to zookeeper."
+ exit 1
+ fi
+ echo "Trying to connect to zookeeper at ${line}. Attempt $counter."
+ sleep 60
+ done
+ done
+ env:
+ - name: ZOOKEEPER_QUORUM
+ {{- if .Values.zookeeper.enabled }}
+ value: "{{ template "dolphinscheduler.zookeeper.quorum" . }}"
+ {{- else }}
 value: {{ .Values.externalZookeeper.zookeeperQuorum | quote }}
+ {{- end }}
+ - name: init-postgresql
+ image: busybox:1.31.0
+ command:
+ - /bin/sh
+ - -ec
+ - |
+ while ! nc -z ${POSTGRESQL_HOST} ${POSTGRESQL_PORT}; do
+ counter=$((counter+1))
 if [ $counter -eq 5 ]; then
+ echo "Error: Couldn't connect to postgresql."
+ exit 1
+ fi
+ echo "Trying to connect to postgresql at ${POSTGRESQL_HOST}:${POSTGRESQL_PORT}. Attempt $counter."
+ sleep 60
+ done
+ env:
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
 value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ containers:
+ - name: {{ include "dolphinscheduler.fullname" . }}-master
+ image: {{ include "dolphinscheduler.image.repository" . | quote }}
+ args:
+ - "master-server"
+ ports:
+ - containerPort: 8888
+ name: unused-tcp-port
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: TZ
+ value: {{ .Values.timezone }}
+ - name: MASTER_EXEC_THREADS
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_EXEC_THREADS
+ - name: MASTER_EXEC_TASK_NUM
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_EXEC_TASK_NUM
+ - name: MASTER_HEARTBEAT_INTERVAL
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_HEARTBEAT_INTERVAL
+ - name: MASTER_TASK_COMMIT_RETRYTIMES
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_TASK_COMMIT_RETRYTIMES
+ - name: MASTER_TASK_COMMIT_INTERVAL
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_TASK_COMMIT_INTERVAL
+ - name: MASTER_MAX_CPULOAD_AVG
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_MAX_CPULOAD_AVG
+ - name: MASTER_RESERVED_MEMORY
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ key: MASTER_RESERVED_MEMORY
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
 value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ - name: POSTGRESQL_USERNAME
+ {{- if .Values.postgresql.enabled }}
 value: {{ .Values.postgresql.postgresqlUsername | quote }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.username | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ {{- if .Values.postgresql.enabled }}
+ name: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ key: postgresql-password
+ {{- else }}
+ name: {{ printf "%s-%s" .Release.Name "externaldb" }}
+ key: db-password
+ {{- end }}
+ - name: TASK_QUEUE
+ {{- if .Values.zookeeper.enabled }}
+ value: {{ .Values.zookeeper.taskQueue }}
+ {{- else }}
+ value: {{ .Values.externalZookeeper.taskQueue }}
+ {{- end }}
+ - name: ZOOKEEPER_QUORUM
+ {{- if .Values.zookeeper.enabled }}
+ value: {{ template "dolphinscheduler.zookeeper.quorum" . }}
+ {{- else }}
 value: {{ .Values.externalZookeeper.zookeeperQuorum | quote }}
+ {{- end }}
+ {{- if .Values.master.livenessProbe.enabled }}
+ livenessProbe:
+ exec:
+ command:
+ - sh
+ - /root/checkpoint.sh
+ - master-server
+ initialDelaySeconds: {{ .Values.master.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.master.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.master.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.master.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.master.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.master.readinessProbe.enabled }}
+ readinessProbe:
+ exec:
+ command:
+ - sh
+ - /root/checkpoint.sh
+ - master-server
+ initialDelaySeconds: {{ .Values.master.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.master.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.master.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.master.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.master.readinessProbe.failureThreshold }}
+ {{- end }}
+ volumeMounts:
+ - mountPath: "/opt/dolphinscheduler/logs"
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ volumes:
+ - name: {{ include "dolphinscheduler.fullname" . }}-master
+ {{- if .Values.master.persistentVolumeClaim.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ include "dolphinscheduler.fullname" . }}-master
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
+ {{- if .Values.master.persistentVolumeClaim.enabled }}
+ volumeClaimTemplates:
+ - metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-master
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ spec:
+ accessModes:
+ {{- range .Values.master.persistentVolumeClaim.accessModes }}
+ - {{ . | quote }}
+ {{- end }}
+ storageClassName: {{ .Values.master.persistentVolumeClaim.storageClassName | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.master.persistentVolumeClaim.storage | quote }}
+ {{- end }}
diff --git a/charts/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml b/charts/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml
new file mode 100644
index 0000000..a240797
--- /dev/null
+++ b/charts/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml
@@ -0,0 +1,275 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: worker
+spec:
+ podManagementPolicy: {{ .Values.worker.podManagementPolicy }}
+ replicas: {{ .Values.worker.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: worker
+ serviceName: {{ template "dolphinscheduler.fullname" . }}-worker-headless
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: worker
+ spec:
+ {{- if .Values.worker.affinity }}
+ affinity: {{- toYaml .Values.worker.affinity | nindent 8 }}
+ {{- end }}
+ {{- if .Values.worker.nodeSelector }}
+ nodeSelector: {{- toYaml .Values.worker.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.worker.tolerations }}
+ tolerations: {{- toYaml .Values.worker.tolerations | nindent 8 }}
+ {{- end }}
+ initContainers:
+ - name: init-zookeeper
+ image: busybox:1.31.0
+ command:
+ - /bin/sh
+ - -ec
+ - |
+ echo "${ZOOKEEPER_QUORUM}" | awk -F ',' 'BEGIN{ i=1 }{ while( i <= NF ){ print $i; i++ } }' | while read line; do
+ while ! nc -z ${line%:*} ${line#*:}; do
+ counter=$((counter+1))
+ if [ $counter -eq 5 ]; then
+ echo "Error: Couldn't connect to zookeeper."
+ exit 1
+ fi
+ echo "Trying to connect to zookeeper at ${line}. Attempt $counter."
+ sleep 60
+ done
+ done
+ env:
+ - name: ZOOKEEPER_QUORUM
+ {{- if .Values.zookeeper.enabled }}
+ value: "{{ template "dolphinscheduler.zookeeper.quorum" . }}"
+ {{- else }}
+ value: {{ .Values.externalZookeeper.zookeeperQuorum | quote }}
+ {{- end }}
+ - name: init-postgresql
+ image: busybox:1.31.0
+ command:
+ - /bin/sh
+ - -ec
+ - |
+ while ! nc -z ${POSTGRESQL_HOST} ${POSTGRESQL_PORT}; do
+ counter=$((counter+1))
+ if [ $counter -eq 5 ]; then
+ echo "Error: Couldn't connect to postgresql."
+ exit 1
+ fi
+ echo "Trying to connect to postgresql at ${POSTGRESQL_HOST}:${POSTGRESQL_PORT}. Attempt $counter."
+ sleep 60
+ done
+ env:
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
+ value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ containers:
+ - name: {{ include "dolphinscheduler.fullname" . }}-worker
+ image: {{ include "dolphinscheduler.image.repository" . | quote }}
+ args:
+ - "worker-server"
+ ports:
+ - containerPort: 50051
+ name: "logs-port"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: TZ
+ value: {{ .Values.timezone }}
+ - name: WORKER_EXEC_THREADS
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ key: WORKER_EXEC_THREADS
+ - name: WORKER_FETCH_TASK_NUM
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ key: WORKER_FETCH_TASK_NUM
+ - name: WORKER_HEARTBEAT_INTERVAL
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ key: WORKER_HEARTBEAT_INTERVAL
+ - name: WORKER_MAX_CPULOAD_AVG
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ key: WORKER_MAX_CPULOAD_AVG
+ - name: WORKER_RESERVED_MEMORY
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ key: WORKER_RESERVED_MEMORY
+ - name: POSTGRESQL_HOST
+ {{- if .Values.postgresql.enabled }}
+ value: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.host | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PORT
+ {{- if .Values.postgresql.enabled }}
+ value: "5432"
+ {{- else }}
+ value: {{ .Values.externalDatabase.port | quote }}
+ {{- end }}
+ - name: POSTGRESQL_USERNAME
+ {{- if .Values.postgresql.enabled }}
+ value: {{ .Values.postgresql.postgresqlUsername | quote }}
+ {{- else }}
+ value: {{ .Values.externalDatabase.username | quote }}
+ {{- end }}
+ - name: POSTGRESQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ {{- if .Values.postgresql.enabled }}
+ name: {{ template "dolphinscheduler.postgresql.fullname" . }}
+ key: postgresql-password
+ {{- else }}
+ name: {{ printf "%s-%s" .Release.Name "externaldb" }}
+ key: db-password
+ {{- end }}
+ - name: TASK_QUEUE
+ {{- if .Values.zookeeper.enabled }}
+ value: {{ .Values.zookeeper.taskQueue }}
+ {{- else }}
+ value: {{ .Values.externalZookeeper.taskQueue }}
+ {{- end }}
+ - name: ZOOKEEPER_QUORUM
+ {{- if .Values.zookeeper.enabled }}
+ value: "{{ template "dolphinscheduler.zookeeper.quorum" . }}"
+ {{- else }}
+ value: {{ .Values.externalZookeeper.zookeeperQuorum | quote }}
+ {{- end }}
+ {{- if .Values.worker.livenessProbe.enabled }}
+ livenessProbe:
+ exec:
+ command:
+ - sh
+ - /root/checkpoint.sh
+ - worker-server
+ initialDelaySeconds: {{ .Values.worker.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.worker.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.worker.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.worker.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.worker.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.worker.readinessProbe.enabled }}
+ readinessProbe:
+ exec:
+ command:
+ - sh
+ - /root/checkpoint.sh
+ - worker-server
+ initialDelaySeconds: {{ .Values.worker.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.worker.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.worker.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.worker.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.worker.readinessProbe.failureThreshold }}
+ {{- end }}
+ volumeMounts:
+ - mountPath: {{ include "dolphinscheduler.worker.base.dir" . | quote }}
+ name: {{ include "dolphinscheduler.fullname" . }}-worker-data
+ - mountPath: "/opt/dolphinscheduler/logs"
+ name: {{ include "dolphinscheduler.fullname" . }}-worker-logs
+ - mountPath: "/opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh"
+ subPath: "dolphinscheduler_env.sh"
+ name: {{ include "dolphinscheduler.fullname" . }}-worker-configmap
+ volumes:
+ - name: {{ include "dolphinscheduler.fullname" . }}-worker-data
+ {{- if .Values.worker.persistentVolumeClaim.dataPersistentVolume.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ include "dolphinscheduler.fullname" . }}-worker-data
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
+ - name: {{ include "dolphinscheduler.fullname" . }}-worker-logs
+ {{- if .Values.worker.persistentVolumeClaim.logsPersistentVolume.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ include "dolphinscheduler.fullname" . }}-worker-logs
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
+ - name: {{ include "dolphinscheduler.fullname" . }}-worker-configmap
+ configMap:
+ defaultMode: 0777
+ name: {{ include "dolphinscheduler.fullname" . }}-worker
+ items:
+ - key: dolphinscheduler_env.sh
+ path: dolphinscheduler_env.sh
+ {{- if .Values.worker.persistentVolumeClaim.enabled }}
+ volumeClaimTemplates:
+ {{- if .Values.worker.persistentVolumeClaim.dataPersistentVolume.enabled }}
+ - metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker-data
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker-data
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ spec:
+ accessModes:
+ {{- range .Values.worker.persistentVolumeClaim.dataPersistentVolume.accessModes }}
+ - {{ . | quote }}
+ {{- end }}
+ storageClassName: {{ .Values.worker.persistentVolumeClaim.dataPersistentVolume.storageClassName | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.worker.persistentVolumeClaim.dataPersistentVolume.storage | quote }}
+ {{- end }}
+ {{- if .Values.worker.persistentVolumeClaim.logsPersistentVolume.enabled }}
+ - metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker-logs
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker-logs
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ spec:
+ accessModes:
+ {{- range .Values.worker.persistentVolumeClaim.logsPersistentVolume.accessModes }}
+ - {{ . | quote }}
+ {{- end }}
+ storageClassName: {{ .Values.worker.persistentVolumeClaim.logsPersistentVolume.storageClassName | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.worker.persistentVolumeClaim.logsPersistentVolume.storage | quote }}
+ {{- end }}
+ {{- end }}
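The worker (and master) init containers above block pod startup until every ZooKeeper quorum member and the PostgreSQL host answer on their ports. The quorum handling splits a comma-separated `host:port` list with awk and then peels host and port apart with POSIX parameter expansion. A minimal standalone sketch of that parsing step (hostnames are placeholders, not values from the chart):

```shell
#!/bin/sh
# Sketch of the init-zookeeper parsing: split the comma-separated quorum
# into host:port entries, then split each entry in pure shell.
#   ${line%:*} strips the shortest ":..." suffix  -> keeps the host
#   ${line#*:} strips the shortest "...:" prefix  -> keeps the port
ZOOKEEPER_QUORUM="zk-0.zk-headless:2181,zk-1.zk-headless:2181"

echo "${ZOOKEEPER_QUORUM}" | awk -F ',' '{ for (i = 1; i <= NF; i++) print $i }' | while read -r line; do
  host=${line%:*}
  port=${line#*:}
  # In the real init container this is where `nc -z $host $port` runs.
  echo "would probe ${host} on port ${port}"
done
```

The template's awk uses an explicit `BEGIN{ i=1 }` while-loop; the `for` loop here is an equivalent, more idiomatic form of the same field iteration.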
diff --git a/charts/dolphinscheduler/templates/svc-dolphinscheduler-api.yaml b/charts/dolphinscheduler/templates/svc-dolphinscheduler-api.yaml
new file mode 100644
index 0000000..4d07ade
--- /dev/null
+++ b/charts/dolphinscheduler/templates/svc-dolphinscheduler-api.yaml
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-api
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ ports:
+ - port: 12345
+ targetPort: tcp-port
+ protocol: TCP
+ name: tcp-port
+ selector:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: api
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/svc-dolphinscheduler-frontend.yaml b/charts/dolphinscheduler/templates/svc-dolphinscheduler-frontend.yaml
new file mode 100644
index 0000000..60d0d6e
--- /dev/null
+++ b/charts/dolphinscheduler/templates/svc-dolphinscheduler-frontend.yaml
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ ports:
+ - port: 8888
+ targetPort: tcp-port
+ protocol: TCP
+ name: tcp-port
+ selector:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-frontend
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: frontend
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/svc-dolphinscheduler-master-headless.yaml b/charts/dolphinscheduler/templates/svc-dolphinscheduler-master-headless.yaml
new file mode 100644
index 0000000..7aaf0b4
--- /dev/null
+++ b/charts/dolphinscheduler/templates/svc-dolphinscheduler-master-headless.yaml
@@ -0,0 +1,36 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-master-headless
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master-headless
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ clusterIP: "None"
+ ports:
+ - port: 8888
+ targetPort: tcp-port
+ protocol: TCP
+ name: unused-tcp-port
+ selector:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: master
\ No newline at end of file
diff --git a/charts/dolphinscheduler/templates/svc-dolphinscheduler-worker-headless.yaml b/charts/dolphinscheduler/templates/svc-dolphinscheduler-worker-headless.yaml
new file mode 100644
index 0000000..3e92a34
--- /dev/null
+++ b/charts/dolphinscheduler/templates/svc-dolphinscheduler-worker-headless.yaml
@@ -0,0 +1,36 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "dolphinscheduler.fullname" . }}-worker-headless
+ labels:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker-headless
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ clusterIP: "None"
+ ports:
+ - port: 50051
+ targetPort: logs-port
+ protocol: TCP
+ name: logs-port
+ selector:
+ app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: worker
\ No newline at end of file
diff --git a/charts/dolphinscheduler/values.yaml b/charts/dolphinscheduler/values.yaml
new file mode 100644
index 0000000..962a031
--- /dev/null
+++ b/charts/dolphinscheduler/values.yaml
@@ -0,0 +1,355 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Default values for dolphinscheduler-chart.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+nameOverride: ""
+fullnameOverride: ""
+
+timezone: "Asia/Shanghai"
+
+image:
+ registry: "docker.io"
+ repository: "dolphinscheduler"
+ tag: "1.2.1"
+ pullPolicy: "IfNotPresent"
+
+imagePullSecrets: []
+
+# If postgresql.enabled is true, the chart deploys a built-in PostgreSQL for DolphinScheduler.
+postgresql:
+ enabled: true
+ postgresqlUsername: "root"
+ postgresqlPassword: "root"
+ postgresqlDatabase: "dolphinscheduler"
+ persistence:
+ enabled: false
+ size: "20Gi"
+ storageClass: "-"
+
+# To use an external PostgreSQL instead, set postgresql.enabled to false.
+# DolphinScheduler will then connect using the externalDatabase settings below.
+externalDatabase:
+ host: "localhost"
+ port: "5432"
+ username: "root"
+ password: "root"
+ database: "dolphinscheduler"
+
+# If zookeeper.enabled is true, the chart deploys a built-in ZooKeeper for DolphinScheduler.
+zookeeper:
+ enabled: true
+ taskQueue: "zookeeper"
+ persistence:
+ enabled: false
+ size: "20Gi"
+ storageClass: "-"
+
+# To use an external ZooKeeper instead, set zookeeper.enabled to false.
+# DolphinScheduler will then connect using the externalZookeeper settings below.
+externalZookeeper:
+ taskQueue: "zookeeper"
+ zookeeperQuorum: "127.0.0.1:2181"
+
+master:
+ podManagementPolicy: "Parallel"
+ replicas: "3"
+ # NodeSelector is a selector which must be true for the pod to fit on a node.
+ # Selector which must match a node's labels for the pod to be scheduled on that node.
+ # More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ nodeSelector: {}
+ # Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
+ # effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
+ tolerations: []
+ # Affinity is a group of affinity scheduling rules.
+ # If specified, the pod's scheduling constraints.
+ # More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
+ affinity: {}
+ configmap:
+ MASTER_EXEC_THREADS: "100"
+ MASTER_EXEC_TASK_NUM: "20"
+ MASTER_HEARTBEAT_INTERVAL: "10"
+ MASTER_TASK_COMMIT_RETRYTIMES: "5"
+ MASTER_TASK_COMMIT_INTERVAL: "1000"
+ MASTER_MAX_CPULOAD_AVG: "100"
+ MASTER_RESERVED_MEMORY: "0.1"
+ ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ livenessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ readinessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## volumeClaimTemplates is a list of claims that pods are allowed to reference.
+ ## The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod.
+ ## Every claim in this list must have at least one matching (by name) volumeMount in one container in the template.
+ ## A claim in this list takes precedence over any volumes in the template, with the same name.
+ persistentVolumeClaim:
+ enabled: false
+ accessModes:
+ - "ReadWriteOnce"
+ storageClassName: "-"
+ storage: "20Gi"
+
+worker:
+ podManagementPolicy: "Parallel"
+ replicas: "3"
+ # NodeSelector is a selector which must be true for the pod to fit on a node.
+ # Selector which must match a node's labels for the pod to be scheduled on that node.
+ # More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ nodeSelector: {}
+ # Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
+ # effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
+ tolerations: []
+ # Affinity is a group of affinity scheduling rules.
+ # If specified, the pod's scheduling constraints.
+ # More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
+ affinity: {}
+ ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ livenessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ readinessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ configmap:
+ WORKER_EXEC_THREADS: "100"
+ WORKER_HEARTBEAT_INTERVAL: "10"
+ WORKER_FETCH_TASK_NUM: "3"
+ WORKER_MAX_CPULOAD_AVG: "100"
+ WORKER_RESERVED_MEMORY: "0.1"
+ DOLPHINSCHEDULER_DATA_BASEDIR_PATH: "/tmp/dolphinscheduler"
+ DOLPHINSCHEDULER_ENV:
+ - "export HADOOP_HOME=/opt/soft/hadoop"
+ - "export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop"
+ - "export SPARK_HOME1=/opt/soft/spark1"
+ - "export SPARK_HOME2=/opt/soft/spark2"
+ - "export PYTHON_HOME=/opt/soft/python"
+ - "export JAVA_HOME=/opt/soft/java"
+ - "export HIVE_HOME=/opt/soft/hive"
+ - "export FLINK_HOME=/opt/soft/flink"
+ - "export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$PATH"
+ ## volumeClaimTemplates is a list of claims that pods are allowed to reference.
+ ## The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod.
+ ## Every claim in this list must have at least one matching (by name) volumeMount in one container in the template.
+ ## A claim in this list takes precedence over any volumes in the template, with the same name.
+ persistentVolumeClaim:
+ enabled: false
+ ## dolphinscheduler data volume
+ dataPersistentVolume:
+ enabled: false
+ accessModes:
+ - "ReadWriteOnce"
+ storageClassName: "-"
+ storage: "20Gi"
+ ## dolphinscheduler logs volume
+ logsPersistentVolume:
+ enabled: false
+ accessModes:
+ - "ReadWriteOnce"
+ storageClassName: "-"
+ storage: "20Gi"
+
+alert:
+ strategy:
+ type: "RollingUpdate"
+ rollingUpdate:
+ maxSurge: "25%"
+ maxUnavailable: "25%"
+ replicas: "1"
+ # NodeSelector is a selector which must be true for the pod to fit on a node.
+ # Selector which must match a node's labels for the pod to be scheduled on that node.
+ # More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ nodeSelector: {}
+ # Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
+ # effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
+ tolerations: []
+ # Affinity is a group of affinity scheduling rules.
+ # If specified, the pod's scheduling constraints.
+ # More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
+ affinity: {}
+ configmap:
+ XLS_FILE_PATH: "/tmp/xls"
+ MAIL_SERVER_HOST: ""
+ MAIL_SERVER_PORT: ""
+ MAIL_SENDER: ""
+ MAIL_USER: ""
+ MAIL_PASSWD: ""
+ MAIL_SMTP_STARTTLS_ENABLE: false
+ MAIL_SMTP_SSL_ENABLE: false
+ MAIL_SMTP_SSL_TRUST: ""
+ ENTERPRISE_WECHAT_ENABLE: false
+ ENTERPRISE_WECHAT_CORP_ID: ""
+ ENTERPRISE_WECHAT_SECRET: ""
+ ENTERPRISE_WECHAT_AGENT_ID: ""
+ ENTERPRISE_WECHAT_USERS: ""
+ ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ livenessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ readinessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## PersistentVolumeClaim for the alert component. Alert runs as a Deployment,
+ ## so a standalone PVC (not a StatefulSet volumeClaimTemplate) is created when enabled.
+ persistentVolumeClaim:
+ enabled: false
+ accessModes:
+ - "ReadWriteOnce"
+ storageClassName: "-"
+ storage: "20Gi"
+
+api:
+ strategy:
+ type: "RollingUpdate"
+ rollingUpdate:
+ maxSurge: "25%"
+ maxUnavailable: "25%"
+ replicas: "1"
+ # NodeSelector is a selector which must be true for the pod to fit on a node.
+ # Selector which must match a node's labels for the pod to be scheduled on that node.
+ # More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ nodeSelector: {}
+ # Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
+ # effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
+ tolerations: []
+ # Affinity is a group of affinity scheduling rules.
+ # If specified, the pod's scheduling constraints.
+ # More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
+ affinity: {}
+ ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ livenessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ readinessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## PersistentVolumeClaim for the api component. Api runs as a Deployment,
+ ## so a standalone PVC (not a StatefulSet volumeClaimTemplate) is created when enabled.
+ persistentVolumeClaim:
+ enabled: false
+ accessModes:
+ - "ReadWriteOnce"
+ storageClassName: "-"
+ storage: "20Gi"
+
+frontend:
+ strategy:
+ type: "RollingUpdate"
+ rollingUpdate:
+ maxSurge: "25%"
+ maxUnavailable: "25%"
+ replicas: "1"
+ # NodeSelector is a selector which must be true for the pod to fit on a node.
+ # Selector which must match a node's labels for the pod to be scheduled on that node.
+ # More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ nodeSelector: {}
+ # Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
+ # effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
+ tolerations: []
+ # Affinity is a group of affinity scheduling rules.
+ # If specified, the pod's scheduling constraints.
+ # More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
+ affinity: {}
+ ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ livenessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
+ ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ readinessProbe:
+ enabled: true
+ initialDelaySeconds: "30"
+ periodSeconds: "30"
+ timeoutSeconds: "5"
+ failureThreshold: "3"
+ successThreshold: "1"
+ ## PersistentVolumeClaim for the frontend component. Frontend runs as a Deployment,
+ ## so a standalone PVC (not a StatefulSet volumeClaimTemplate) is created when enabled.
+ persistentVolumeClaim:
+ enabled: false
+ accessModes:
+ - "ReadWriteOnce"
+ storageClassName: "-"
+ storage: "20Gi"
+
+ingress:
+ enabled: false
+ host: "dolphinscheduler.org"
+ path: "/"
+ tls:
+ enabled: false
+ hosts:
+ - "dolphinscheduler.org"
+ secretName: "dolphinscheduler-tls"
\ No newline at end of file
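For reference, a sketch of how the externalDatabase and externalZookeeper blocks in values.yaml are meant to be used together when pointing the chart at pre-existing services. All hosts and credentials below are placeholders, not defaults from the chart:

```yaml
# my-values.yaml -- example override (hosts and credentials are placeholders)
postgresql:
  enabled: false
zookeeper:
  enabled: false
externalDatabase:
  host: "pg.example.internal"
  port: "5432"
  username: "dolphinscheduler"
  password: "changeit"
  database: "dolphinscheduler"
externalZookeeper:
  taskQueue: "zookeeper"
  zookeeperQuorum: "zk-0.example.internal:2181,zk-1.example.internal:2181"
```

This would be applied with something like `helm install dolphinscheduler ./charts/dolphinscheduler -f my-values.yaml` (the exact CLI shape depends on the Helm version in use).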