Posted to commits@inlong.apache.org by do...@apache.org on 2022/04/10 07:47:03 UTC
[incubator-inlong] branch master updated: [INLONG-3552][InLong] Add more configuration items (#3583)
This is an automated email from the ASF dual-hosted git repository.
dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-inlong.git
The following commit(s) were added to refs/heads/master by this push:
new 6f7929667 [INLONG-3552][InLong] Add more configuration items (#3583)
6f7929667 is described below
commit 6f7929667fe0a4b859abe844bbd2f85e7a5fe67c
Author: Yuanhao Ji <ji...@apache.org>
AuthorDate: Sun Apr 10 15:46:57 2022 +0800
[INLONG-3552][InLong] Add more configuration items (#3583)
---
docker/kubernetes/README.md | 41 +++
docker/kubernetes/templates/_helpers.tpl | 23 +-
docker/kubernetes/templates/agent-statefulset.yaml | 88 +++---
docker/kubernetes/templates/audit-statefulset.yaml | 40 ++-
.../templates/dashboard-statefulset.yaml | 56 ++--
.../templates/dataproxy-statefulset.yaml | 56 ++--
.../kubernetes/templates/manager-statefulset.yaml | 26 +-
docker/kubernetes/templates/mysql-service.yaml | 2 +-
docker/kubernetes/templates/mysql-statefulset.yaml | 24 +-
.../templates/tubemq-broker-configmap.yaml | 2 +-
.../templates/tubemq-broker-service.yaml | 18 +-
.../templates/tubemq-broker-statefulset.yaml | 66 +++--
.../templates/tubemq-manager-statefulset.yaml | 38 ++-
.../templates/tubemq-master-service.yaml | 18 +-
.../templates/tubemq-master-statefulset.yaml | 66 +++--
docker/kubernetes/templates/zookeeper-service.yaml | 2 +-
.../templates/zookeeper-statefulset.yaml | 30 +-
docker/kubernetes/values.yaml | 319 +++++++++++++++++++--
18 files changed, 707 insertions(+), 208 deletions(-)
diff --git a/docker/kubernetes/README.md b/docker/kubernetes/README.md
index fb3c30b3d..d305e4734 100644
--- a/docker/kubernetes/README.md
+++ b/docker/kubernetes/README.md
@@ -17,7 +17,48 @@ helm upgrade inlong --install -n inlong ./
#### Configuration
+The configuration file is [values.yaml](values.yaml), and the following table lists the configurable parameters of InLong and their default values.
+| Parameter | Default | Description |
+|:--------------------------------------------------------------------------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------:|
+| `timezone` | `Asia/Shanghai` | Time zone used by all InLong components |
+| `images.pullPolicy` | `IfNotPresent` | Image pull policy. One of `Always`, `Never`, `IfNotPresent` |
+| `images.<component>.repository` | | Docker image repository for the component |
+| `images.<component>.tag` | `latest` | Docker image tag for the component |
+| `<component>.component` | | Component name |
+| `<component>.replicaCount` | `1` | The desired number of replicas of the component |
+| `<component>.podManagementPolicy` | `OrderedReady` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down |
+| `<component>.annotations` | `{}` | The `annotations` field can be used to attach arbitrary non-identifying metadata to objects |
+| `<component>.tolerations` | `[]` | Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints |
+| `<component>.nodeSelector` | `{}` | You can add the `nodeSelector` field to your Pod specification and specify the node labels you want the target node to have |
+| `<component>.affinity` | `{}` | Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels |
+| `<component>.terminationGracePeriodSeconds` | `30` | Optional duration in seconds the pod needs to terminate gracefully |
+| `<component>.resources` | `{}` | Optionally specify how much of each resource a container needs |
+| `<component>.port(s)` | | The port(s) for each component service |
+| `<component>.env` | `{}` | Environment variables for each component container |
+| <code>\<component\>.probe.\<liveness\|readiness\>.enabled</code> | `true` | Enable or disable the liveness or readiness probe |
+| <code>\<component\>.probe.\<liveness\|readiness\>.failureThreshold</code> | `10` | Minimum consecutive failures for the probe to be considered failed |
+| <code>\<component\>.probe.\<liveness\|readiness\>.initialDelaySeconds</code> | `10` | Delay before the probe is initiated |
+| <code>\<component\>.probe.\<liveness\|readiness\>.periodSeconds</code> | `30` | How often (in seconds) to perform the probe |
+| `<component>.volumes.name` | | Volume name |
+| `<component>.volumes.size` | `10Gi` | Volume size |
+| `<component>.service.annotations` | `{}` | The `annotations` field may need to be set when service.type is `LoadBalancer` |
+| `<component>.service.type` | `ClusterIP` | The `type` field determines how the service is exposed. Valid options are `ClusterIP`, `NodePort`, `LoadBalancer` and `ExternalName` |
+| `<component>.service.clusterIP` | `nil` | ClusterIP is the IP address of the service and is usually assigned randomly by the master |
+| `<component>.service.nodePort` | `nil` | NodePort is the port on each node on which this service is exposed when service type is `NodePort` |
+| `<component>.service.loadBalancerIP` | `nil` | LoadBalancer will get created with the IP specified in this field when service type is `LoadBalancer` |
+| `<component>.service.externalName` | `nil` | ExternalName is the external reference that kubedns or equivalent will return as a CNAME record for this service, requires service type to be `ExternalName` |
+| `<component>.service.externalIPs` | `[]` | ExternalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service |
+| `external.mysql.enabled` | `false` | Whether to use an external MySQL; if `false`, InLong uses the internal MySQL by default |
+| `external.mysql.hostname` | `localhost` | External MySQL hostname |
+| `external.mysql.port` | `3306` | External MySQL port |
+| `external.mysql.username` | `root` | External MySQL username |
+| `external.mysql.password` | `password` | External MySQL password |
+| `external.pulsar.enabled` | `false` | Whether to use an external Pulsar; if `false`, InLong uses the internal TubeMQ by default |
+| `external.pulsar.serviceUrl` | `localhost:6650` | External Pulsar service URL |
+| `external.pulsar.adminUrl` | `localhost:8080` | External Pulsar admin URL |
+
+> The components include `agent`, `audit`, `dashboard`, `dataproxy`, `manager`, `tubemq-manager`, `tubemq-master`, `tubemq-broker`, `zookeeper` and `mysql`.
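As an illustration of the parameters documented above, a values override might look like the sketch below. This is hypothetical: the resource figures, the `disktype: ssd` label, and the external MySQL host are placeholders, not chart defaults.

```yaml
# Hypothetical values override; numbers and labels are placeholders.
timezone: Asia/Shanghai

images:
  pullPolicy: IfNotPresent

manager:
  replicaCount: 2
  podManagementPolicy: OrderedReady
  terminationGracePeriodSeconds: 30
  nodeSelector:
    disktype: ssd          # placeholder node label
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
  service:
    type: ClusterIP

external:
  mysql:
    enabled: true
    hostname: mysql.example.com   # placeholder host
    port: 3306
    username: root
    password: password
```

Such a file would be applied with the install command shown earlier in this README, e.g. `helm upgrade inlong --install -n inlong -f my-values.yaml ./`.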
#### Uninstall
diff --git a/docker/kubernetes/templates/_helpers.tpl b/docker/kubernetes/templates/_helpers.tpl
index 77dd442d6..765c8ba9b 100644
--- a/docker/kubernetes/templates/_helpers.tpl
+++ b/docker/kubernetes/templates/_helpers.tpl
@@ -77,35 +77,42 @@ release: {{ .Release.Name }}
Define the audit hostname
*/}}
{{- define "inlong.audit.hostname" -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.audit.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.audit.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{- end -}}
+
+{{/*
+Define the dashboard hostname
+*/}}
+{{- define "inlong.dashboard.hostname" -}}
+{{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{/*
Define the manager hostname
*/}}
{{- define "inlong.manager.hostname" -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{/*
Define the dataproxy hostname
*/}}
{{- define "inlong.dataproxy.hostname" -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{/*
Define the tubemq manager hostname
*/}}
{{- define "inlong.tubemqManager.hostname" -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.tubemqManager.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.tubemqManager.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{/*
Define the tubemq master hostname
*/}}
{{- define "inlong.tubemqMaster.hostname" -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{/*
@@ -115,7 +122,7 @@ Define the mysql hostname
{{- if .Values.external.mysql.enabled -}}
{{ .Values.external.mysql.hostname }}
{{- else -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{- end -}}
@@ -126,7 +133,7 @@ Define the mysql port
{{- if .Values.external.mysql.enabled -}}
{{ .Values.external.mysql.port }}
{{- else -}}
-{{ .Values.mysql.ports.server }}
+{{ .Values.mysql.port }}
{{- end -}}
{{- end -}}
@@ -145,7 +152,7 @@ Define the mysql username
Define the zookeeper hostname
*/}}
{{- define "inlong.zookeeper.hostname" -}}
-${HOSTNAME}.{{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Release.Namespace }}.svc.cluster.local
+{{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end -}}
{{/*
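The helper edits above all drop the `${HOSTNAME}.` prefix, so each `inlong.<component>.hostname` template now renders only the headless-service FQDN; callers that need a per-pod address prepend the pod name themselves. A minimal shell sketch of that construction, assuming a release named `inlong` in namespace `inlong`:

```shell
# Assumed example values: release "inlong", namespace "inlong".
SERVICE_HOST="inlong-manager.inlong.svc.cluster.local"  # what inlong.manager.hostname now renders
REPLICA="inlong-manager-0"                              # StatefulSet pod name for ordinal 0
HOST="$REPLICA.$SERVICE_HOST"                           # per-pod DNS name, built by the caller
echo "$HOST"
```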
diff --git a/docker/kubernetes/templates/agent-statefulset.yaml b/docker/kubernetes/templates/agent-statefulset.yaml
index c5cc77f6e..3db43e457 100644
--- a/docker/kubernetes/templates/agent-statefulset.yaml
+++ b/docker/kubernetes/templates/agent-statefulset.yaml
@@ -25,61 +25,81 @@ metadata:
component: {{ .Values.agent.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.agent.component }}
- replicas: {{ .Values.agent.replicaCount }}
+ replicas: {{ .Values.agent.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.agent.component }}
+ updateStrategy:
+ type: {{ .Values.agent.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.agent.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.agent.component }}
+ {{- if .Values.agent.annotations }}
+ annotations:
+ {{- toYaml .Values.agent.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.agent.tolerations }}
+ tolerations:
+ {{- toYaml .Values.agent.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.agent.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.agent.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.agent.affinity }}
+ affinity:
+ {{- toYaml .Values.agent.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.agent.terminationGracePeriodSeconds }}
initContainers:
- - name: wait-{{ .Values.dashboard.component }}-ready
- image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
- imagePullPolicy: {{ .Values.images.pullPolicy }}
- command: [ "/bin/sh", "-c" ]
- args:
- - |
- count={{ .Values.dashboard.replicaCount }}
- for i in $(seq 0 $(expr $count - 1))
- do
- replica="{{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.dashboard.port }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ - name: wait-{{ .Values.dashboard.component }}-ready
+ image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
+ imagePullPolicy: {{ .Values.images.pullPolicy }}
+ command: [ "/bin/sh", "-c" ]
+ args:
+ - |
+ count={{ .Values.dashboard.replicas }}
+ for i in $(seq 0 $(expr $count - 1))
do
- echo "waiting for $replica to be ready"
- sleep 3
+ replica="{{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }}-$i"
+ host="$replica.{{ template "inlong.dashboard.hostname" . }}"
+ port={{ .Values.dashboard.port }}
+ until nc -z $host $port 2>/dev/null
+ do
+ echo "waiting for $replica to be ready"
+ sleep 3
+ done
done
- done
- - name: wait-{{ .Values.dataproxy.component }}-ready
- image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
- imagePullPolicy: {{ .Values.images.pullPolicy }}
- command: [ "/bin/sh", "-c" ]
- args:
- - |
- count={{ .Values.dataproxy.replicaCount }}
- for i in $(seq 0 $(expr $count - 1))
- do
- replica="{{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.dataproxy.port }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ - name: wait-{{ .Values.dataproxy.component }}-ready
+ image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
+ imagePullPolicy: {{ .Values.images.pullPolicy }}
+ command: [ "/bin/sh", "-c" ]
+ args:
+ - |
+ count={{ .Values.dataproxy.replicas }}
+ for i in $(seq 0 $(expr $count - 1))
do
- echo "waiting for $replica to be ready"
- sleep 3
+ replica="{{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}-$i"
+ host="$replica.{{ template "inlong.dataproxy.hostname" . }}"
+ port={{ .Values.dataproxy.port }}
+ until nc -z $host $port 2>/dev/null
+ do
+ echo "waiting for $replica to be ready"
+ sleep 3
+ done
done
- done
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.agent.component }}
image: {{ .Values.images.agent.repository }}:{{ .Values.images.agent.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.agent.resources }}
resources:
-{{ toYaml .Values.agent.resources | indent 12 }}
+ {{- toYaml .Values.agent.resources | nindent 12 }}
{{- end }}
env:
- name: MANAGER_OPENAPI_IP
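The rewritten init containers above all share one pattern: loop over the expected replica ordinals and block until each pod answers on its service port. Factored out as a standalone function it might look like this sketch (`wait_ready` is an illustrative name, not part of the chart):

```shell
# Sketch of the init-container wait loop, factored into a function.
# wait_ready is an illustrative name, not part of the chart.
wait_ready() {
  count=$1
  service_host=$2
  port=$3
  pod_prefix=$4
  i=0
  while [ "$i" -lt "$count" ]; do
    replica="$pod_prefix-$i"
    host="$replica.$service_host"
    # Retry until the replica accepts TCP connections on the service port.
    until nc -z "$host" "$port" 2>/dev/null; do
      echo "waiting for $replica to be ready"
      sleep 3
    done
    i=$((i + 1))
  done
}
```

For example, the dashboard wait above is equivalent to `wait_ready <replicas> <dashboard-service-fqdn> <dashboard-port> <fullname>-dashboard`.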
diff --git a/docker/kubernetes/templates/audit-statefulset.yaml b/docker/kubernetes/templates/audit-statefulset.yaml
index ba85cdbfa..e8a9f970a 100644
--- a/docker/kubernetes/templates/audit-statefulset.yaml
+++ b/docker/kubernetes/templates/audit-statefulset.yaml
@@ -25,17 +25,37 @@ metadata:
component: {{ .Values.audit.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.audit.component }}
- replicas: {{ .Values.audit.replicaCount }}
+ replicas: {{ .Values.audit.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.audit.component }}
+ updateStrategy:
+ type: {{ .Values.audit.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.audit.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.audit.component }}
+ {{- if .Values.audit.annotations }}
+ annotations:
+ {{- toYaml .Values.audit.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.audit.tolerations }}
+ tolerations:
+ {{- toYaml .Values.audit.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.audit.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.audit.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.audit.affinity }}
+ affinity:
+ {{- toYaml .Values.audit.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.audit.terminationGracePeriodSeconds }}
initContainers:
- name: wait-{{ .Values.mysql.component }}-ready
image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
@@ -43,13 +63,13 @@ spec:
command: [ "/bin/sh", "-c" ]
args:
- |
- count={{ .Values.mysql.replicaCount }}
+ count={{ .Values.mysql.replicas }}
for i in $(seq 0 $(expr $count - 1))
do
replica="{{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.mysql.ports.server }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ host="$replica.{{ template "inlong.mysql.hostname" . }}"
+ port={{ .Values.mysql.port }}
+ until nc -z $host $port 2>/dev/null
do
echo "waiting for $replica to be ready"
sleep 3
@@ -61,13 +81,13 @@ spec:
command: [ "/bin/sh", "-c" ]
args:
- |
- count={{ .Values.manager.replicaCount }}
+ count={{ .Values.manager.replicas }}
for i in $(seq 0 $(expr $count - 1))
do
replica="{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}.{{ .Release.Namespace }}.svc.cluster.local"
+ host="$replica.{{ template "inlong.manager.hostname" . }}"
port={{ .Values.manager.port }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ until nc -z $host $port 2>/dev/null
do
echo "waiting for $replica to be ready"
sleep 3
@@ -79,11 +99,11 @@ spec:
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.audit.resources }}
resources:
-{{ toYaml .Values.audit.resources | indent 12 }}
+ {{- toYaml .Values.audit.resources | nindent 12 }}
{{- end }}
env:
- name: JDBC_URL
- value: "jdbc:mysql://{{ template "inlong.mysql.hostname" . }}:{{ .Values.mysql.ports.server }}/apache_inlong_audit?useSSL=false&allowPublicKeyRetrieval=true&characterEncoding=UTF-8&nullCatalogMeansCurrent=true&serverTimezone=GMT%2b8"
+ value: "jdbc:mysql://{{ template "inlong.mysql.hostname" . }}:{{ .Values.mysql.port }}/apache_inlong_audit?useSSL=false&allowPublicKeyRetrieval=true&characterEncoding=UTF-8&nullCatalogMeansCurrent=true&serverTimezone=GMT%2b8"
- name: USERNAME
value: {{ include "inlong.mysql.username" . | quote }}
- name: PASSWORD
diff --git a/docker/kubernetes/templates/dashboard-statefulset.yaml b/docker/kubernetes/templates/dashboard-statefulset.yaml
index d20bc4f7b..4404abc7f 100644
--- a/docker/kubernetes/templates/dashboard-statefulset.yaml
+++ b/docker/kubernetes/templates/dashboard-statefulset.yaml
@@ -25,43 +25,63 @@ metadata:
component: {{ .Values.dashboard.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }}
- replicas: {{ .Values.tubemqManager.replicaCount }}
+ replicas: {{ .Values.dashboard.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.dashboard.component }}
+ updateStrategy:
+ type: {{ .Values.dashboard.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.dashboard.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.dashboard.component }}
+ {{- if .Values.dashboard.annotations }}
+ annotations:
+ {{- toYaml .Values.dashboard.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.dashboard.tolerations }}
+ tolerations:
+ {{- toYaml .Values.dashboard.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.dashboard.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.dashboard.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.dashboard.affinity }}
+ affinity:
+ {{- toYaml .Values.dashboard.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.dashboard.terminationGracePeriodSeconds }}
initContainers:
- - name: wait-{{ .Values.manager.component }}-ready
- image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
- imagePullPolicy: {{ .Values.images.pullPolicy }}
- command: [ "/bin/sh", "-c" ]
- args:
- - |
- count={{ .Values.manager.replicaCount }}
- for i in $(seq 0 $(expr $count - 1))
- do
- replica="{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.manager.port }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ - name: wait-{{ .Values.manager.component }}-ready
+ image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
+ imagePullPolicy: {{ .Values.images.pullPolicy }}
+ command: [ "/bin/sh", "-c" ]
+ args:
+ - |
+ count={{ .Values.manager.replicas }}
+ for i in $(seq 0 $(expr $count - 1))
do
- echo "waiting for $replica to be ready"
- sleep 3
+ replica="{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}-$i"
+ host="$replica.{{ template "inlong.manager.hostname" . }}"
+ port={{ .Values.manager.port }}
+ until nc -z $host $port 2>/dev/null
+ do
+ echo "waiting for $replica to be ready"
+ sleep 3
+ done
done
- done
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }}
image: {{ .Values.images.dashboard.repository }}:{{ .Values.images.dashboard.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.dashboard.resources }}
resources:
-{{ toYaml .Values.dashboard.resources | indent 12 }}
+ {{- toYaml .Values.dashboard.resources | nindent 12 }}
{{- end }}
env:
- name: MANAGER_API_ADDRESS
diff --git a/docker/kubernetes/templates/dataproxy-statefulset.yaml b/docker/kubernetes/templates/dataproxy-statefulset.yaml
index 92368e538..d4c1c1b02 100644
--- a/docker/kubernetes/templates/dataproxy-statefulset.yaml
+++ b/docker/kubernetes/templates/dataproxy-statefulset.yaml
@@ -25,43 +25,63 @@ metadata:
component: {{ .Values.dataproxy.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}
- replicas: {{ .Values.dataproxy.replicaCount }}
+ replicas: {{ .Values.dataproxy.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.dataproxy.component }}
+ updateStrategy:
+ type: {{ .Values.dataproxy.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.dataproxy.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.dataproxy.component }}
+ {{- if .Values.dataproxy.annotations }}
+ annotations:
+ {{- toYaml .Values.dataproxy.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.dataproxy.tolerations }}
+ tolerations:
+ {{- toYaml .Values.dataproxy.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.dataproxy.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.dataproxy.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.dataproxy.affinity }}
+ affinity:
+ {{- toYaml .Values.dataproxy.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.dataproxy.terminationGracePeriodSeconds }}
initContainers:
- - name: wait-{{ .Values.manager.component }}-ready
- image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
- imagePullPolicy: {{ .Values.images.pullPolicy }}
- command: [ "/bin/sh", "-c" ]
- args:
- - |
- count={{ .Values.manager.replicaCount }}
- for i in $(seq 0 $(expr $count - 1))
- do
- replica="{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.manager.port }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ - name: wait-{{ .Values.manager.component }}-ready
+ image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
+ imagePullPolicy: {{ .Values.images.pullPolicy }}
+ command: [ "/bin/sh", "-c" ]
+ args:
+ - |
+ count={{ .Values.manager.replicas }}
+ for i in $(seq 0 $(expr $count - 1))
do
- echo "waiting for $replica to be ready"
- sleep 3
+ replica="{{ template "inlong.fullname" . }}-{{ .Values.manager.component }}-$i"
+ host="$replica.{{ template "inlong.manager.hostname" . }}"
+ port={{ .Values.manager.port }}
+ until nc -z $host $port 2>/dev/null
+ do
+ echo "waiting for $replica to be ready"
+ sleep 3
+ done
done
- done
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }}
image: {{ .Values.images.dataproxy.repository }}:{{ .Values.images.dataproxy.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.dataproxy.resources }}
resources:
-{{ toYaml .Values.dataproxy.resources | indent 12 }}
+ {{- toYaml .Values.dataproxy.resources | nindent 12 }}
{{- end }}
env:
- name: MANAGER_OPENAPI_IP
diff --git a/docker/kubernetes/templates/manager-statefulset.yaml b/docker/kubernetes/templates/manager-statefulset.yaml
index 68b96dfc0..f07c77615 100644
--- a/docker/kubernetes/templates/manager-statefulset.yaml
+++ b/docker/kubernetes/templates/manager-statefulset.yaml
@@ -25,28 +25,48 @@ metadata:
component: {{ .Values.manager.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.manager.component }}
- replicas: {{ .Values.manager.replicaCount }}
+ replicas: {{ .Values.manager.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.manager.component }}
+ updateStrategy:
+ type: {{ .Values.manager.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.manager.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.manager.component }}
+ {{- if .Values.manager.annotations }}
+ annotations:
+ {{- toYaml .Values.manager.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.manager.tolerations }}
+ tolerations:
+ {{- toYaml .Values.manager.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.manager.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.manager.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.manager.affinity }}
+ affinity:
+ {{- toYaml .Values.manager.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.manager.terminationGracePeriodSeconds }}
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.manager.component }}
image: {{ .Values.images.manager.repository }}:{{ .Values.images.manager.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.manager.resources }}
resources:
-{{ toYaml .Values.manager.resources | indent 12 }}
+ {{- toYaml .Values.manager.resources | nindent 12 }}
{{- end }}
env:
- name: JDBC_URL
- value: "{{ template "inlong.mysql.hostname" . }}:{{ .Values.mysql.ports.server }}"
+ value: "{{ template "inlong.mysql.hostname" . }}:{{ .Values.mysql.port }}"
- name: USERNAME
value: {{ include "inlong.mysql.username" . | quote }}
- name: PASSWORD
diff --git a/docker/kubernetes/templates/mysql-service.yaml b/docker/kubernetes/templates/mysql-service.yaml
index 14155c139..18f551090 100644
--- a/docker/kubernetes/templates/mysql-service.yaml
+++ b/docker/kubernetes/templates/mysql-service.yaml
@@ -29,7 +29,7 @@ spec:
ports:
- name: {{ .Values.mysql.component }}-port
protocol: TCP
- port: {{ .Values.mysql.ports.server }}
+ port: {{ .Values.mysql.port }}
targetPort: 3306
selector:
{{- include "inlong.matchLabels" . | nindent 4 }}
diff --git a/docker/kubernetes/templates/mysql-statefulset.yaml b/docker/kubernetes/templates/mysql-statefulset.yaml
index 14e58f6a4..bd6f7d6ab 100644
--- a/docker/kubernetes/templates/mysql-statefulset.yaml
+++ b/docker/kubernetes/templates/mysql-statefulset.yaml
@@ -26,24 +26,44 @@ metadata:
component: {{ .Values.mysql.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}
- replicas: {{ .Values.mysql.replicaCount }}
+ replicas: {{ .Values.mysql.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.mysql.component }}
+ updateStrategy:
+ type: {{ .Values.mysql.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.mysql.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.mysql.component }}
+ {{- if .Values.mysql.annotations }}
+ annotations:
+ {{- toYaml .Values.mysql.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.mysql.tolerations }}
+ tolerations:
+ {{- toYaml .Values.mysql.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.mysql.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.mysql.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.mysql.affinity }}
+ affinity:
+ {{- toYaml .Values.mysql.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.mysql.terminationGracePeriodSeconds }}
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}
image: {{ .Values.images.mysql.repository }}:{{ .Values.images.mysql.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.mysql.resources }}
resources:
-{{ toYaml .Values.mysql.resources | indent 12 }}
+ {{- toYaml .Values.mysql.resources | nindent 12 }}
{{- end }}
env:
- name: MYSQL_ROOT_PASSWORD
diff --git a/docker/kubernetes/templates/tubemq-broker-configmap.yaml b/docker/kubernetes/templates/tubemq-broker-configmap.yaml
index 12e9880fa..8bc01effb 100644
--- a/docker/kubernetes/templates/tubemq-broker-configmap.yaml
+++ b/docker/kubernetes/templates/tubemq-broker-configmap.yaml
@@ -45,7 +45,7 @@ data:
exit 1
fi
# get active master and register broker
- for ((i=0;i<{{ .Values.tubemqMaster.replicaCount }};i++)); do
+ for ((i=0;i<{{ .Values.tubemqMaster.replicas }};i++)); do
master="{{ template "inlong.fullname" $ }}-\
{{ .Values.tubemqMaster.component }}-$i.{{ template "inlong.fullname" $ }}-\
{{ .Values.tubemqMaster.component }}.{{ .Release.Namespace }}.svc.cluster.local"
diff --git a/docker/kubernetes/templates/tubemq-broker-service.yaml b/docker/kubernetes/templates/tubemq-broker-service.yaml
index 22cf9bbfb..030bf51a3 100644
--- a/docker/kubernetes/templates/tubemq-broker-service.yaml
+++ b/docker/kubernetes/templates/tubemq-broker-service.yaml
@@ -24,18 +24,34 @@ metadata:
{{- include "inlong.commonLabels" . | nindent 4 }}
component: {{ .Values.tubemqBroker.component }}
annotations:
-{{ toYaml .Values.tubemqBroker.service.annotations | indent 4 }}
+ {{- toYaml .Values.tubemqBroker.service.annotations | nindent 4 }}
spec:
type: {{ .Values.tubemqBroker.service.type }}
+ {{- if and (eq .Values.tubemqBroker.service.type "ClusterIP") .Values.tubemqBroker.service.clusterIP }}
+ clusterIP: {{ .Values.tubemqBroker.service.clusterIP }}
+ {{- end }}
ports:
- name: broker-web-port
protocol: TCP
port: {{ .Values.tubemqBroker.ports.webPort }}
targetPort: 8081
+ {{- if and (eq .Values.tubemqBroker.service.type "NodePort") .Values.tubemqBroker.service.webNodePort }}
+ nodePort: {{ .Values.tubemqBroker.service.webNodePort }}
+ {{- end }}
- name: broker-rpc-port
protocol: TCP
port: {{ .Values.tubemqBroker.ports.rpcPort }}
targetPort: 8123
+ {{- if and (eq .Values.tubemqBroker.service.type "LoadBalancer") .Values.tubemqBroker.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.tubemqBroker.service.loadBalancerIP }}
+ {{- end }}
+ {{- if and (eq .Values.tubemqBroker.service.type "ExternalName") .Values.tubemqBroker.service.externalName }}
+ externalName: {{ .Values.tubemqBroker.service.externalName }}
+ {{- end }}
+ {{- if .Values.tubemqBroker.service.externalIPs }}
+ externalIPs:
+ {{- toYaml .Values.tubemqBroker.service.externalIPs | nindent 4 }}
+ {{- end }}
selector:
{{- include "inlong.matchLabels" . | nindent 4 }}
component: {{ .Values.tubemqBroker.component }}
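The new conditionals make the broker service type configurable per the README table. A hypothetical override exposing the broker web port through a NodePort (30081 is a placeholder in the default NodePort range, not a chart default) could be:

```yaml
# Hypothetical override; 30081 is a placeholder NodePort.
tubemqBroker:
  service:
    type: NodePort
    webNodePort: 30081
    annotations: {}
```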
diff --git a/docker/kubernetes/templates/tubemq-broker-statefulset.yaml b/docker/kubernetes/templates/tubemq-broker-statefulset.yaml
index 8d500d658..c186753dc 100644
--- a/docker/kubernetes/templates/tubemq-broker-statefulset.yaml
+++ b/docker/kubernetes/templates/tubemq-broker-statefulset.yaml
@@ -25,51 +25,35 @@ metadata:
component: {{ .Values.tubemqBroker.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.tubemqBroker.component }}
- replicas: {{ .Values.tubemqBroker.replicaCount }}
+ replicas: {{ .Values.tubemqBroker.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.tubemqBroker.component }}
updateStrategy:
-{{ toYaml .Values.tubemqBroker.updateStrategy | indent 4 }}
- podManagementPolicy: {{ .Values.tubemqBroker.podManagementPolicy }}
+ type: {{ .Values.tubemqBroker.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.tubemqBroker.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.tubemqBroker.component }}
+ {{- if .Values.tubemqBroker.annotations }}
annotations:
-{{ toYaml .Values.tubemqBroker.annotations | indent 8 }}
- spec:
- {{- if .Values.tubemqBroker.nodeSelector }}
- nodeSelector:
-{{ toYaml .Values.tubemqBroker.nodeSelector | indent 8 }}
+ {{- toYaml .Values.tubemqBroker.annotations | nindent 8 }}
{{- end }}
+ spec:
{{- if .Values.tubemqBroker.tolerations }}
tolerations:
-{{ toYaml .Values.tubemqBroker.tolerations | indent 8 }}
+ {{- toYaml .Values.tubemqBroker.tolerations | nindent 8 }}
{{- end }}
- initContainers:
- - name: wait-{{ .Values.tubemqMaster.component }}-ready
- image: {{ .Values.images.tubemqServer.repository }}:{{ .Values.images.tubemqServer.tag }}
- imagePullPolicy: {{ .Values.images.pullPolicy }}
- command: [ "/bin/sh", "-c" ]
- args:
- - |
- count={{ .Values.tubemqMaster.replicaCount }}
- for i in $(seq 0 $(expr $count - 1))
- do
- replica="{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.tubemqMaster.ports.webPort }}
- until curl $host:$port/index.htm
- do
- echo "waiting for $replica to be ready"
- sleep 3
- done
- done
+ {{- if .Values.tubemqBroker.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.tubemqBroker.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.tubemqBroker.affinity }}
affinity:
- {{- if .Values.affinity.anti_affinity }}
+ {{- if .Values.tubemqBroker.affinity.antiAffinity }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
@@ -88,14 +72,34 @@ spec:
- {{ .Values.tubemqBroker.component }}
topologyKey: "kubernetes.io/hostname"
{{- end }}
- terminationGracePeriodSeconds: {{ .Values.tubemqBroker.gracePeriod }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.tubemqBroker.terminationGracePeriodSeconds }}
+ initContainers:
+ - name: wait-{{ .Values.tubemqMaster.component }}-ready
+ image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
+ imagePullPolicy: {{ .Values.images.pullPolicy }}
+ command: [ "/bin/sh", "-c" ]
+ args:
+ - |
+ count={{ .Values.tubemqMaster.replicas }}
+ for i in $(seq 0 $(expr $count - 1))
+ do
+ replica="{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}-$i"
+ host="$replica.{{ template "inlong.tubemqMaster.hostname" . }}"
+ port={{ .Values.tubemqMaster.ports.webPort }}
+ until nc -z $host $port 2>/dev/null
+ do
+ echo "waiting for $replica to be ready"
+ sleep 3
+ done
+ done
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.tubemqBroker.component }}
image: {{ .Values.images.tubemqServer.repository }}:{{ .Values.images.tubemqServer.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.tubemqBroker.resources }}
resources:
-{{ toYaml .Values.tubemqBroker.resources | indent 12 }}
+ {{- toYaml .Values.tubemqBroker.resources | nindent 12 }}
{{- end }}
command: [ "/bin/sh", "-c" ]
args:
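Two changes are worth noting in the relocated init container: it now runs the lightweight initContainer image instead of the full tubemqServer image, and the readiness check drops `curl $host:$port/index.htm` in favor of a bare TCP probe, `nc -z`, which only needs the master port to accept connections and requires no HTTP client in the image. The master hostname also comes from a new `inlong.tubemqMaster.hostname` helper rather than being assembled inline. Assuming that helper expands to the usual headless-service FQDN, the rendered wait loop for a single-replica master looks roughly like this (release name, namespace, and port are placeholders):

```yaml
initContainers:
  - name: wait-tubemq-master-ready
    command: [ "/bin/sh", "-c" ]
    args:
      - |
        count=1
        for i in $(seq 0 $(expr $count - 1))
        do
          replica="RELEASE-tubemq-master-$i"
          host="$replica.RELEASE-tubemq-master.NAMESPACE.svc.cluster.local"
          port=8080   # the configured tubemqMaster webPort
          until nc -z $host $port 2>/dev/null
          do
            echo "waiting for $replica to be ready"
            sleep 3
          done
        done
```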
diff --git a/docker/kubernetes/templates/tubemq-manager-statefulset.yaml b/docker/kubernetes/templates/tubemq-manager-statefulset.yaml
index fd680b685..6bf1df656 100644
--- a/docker/kubernetes/templates/tubemq-manager-statefulset.yaml
+++ b/docker/kubernetes/templates/tubemq-manager-statefulset.yaml
@@ -25,17 +25,37 @@ metadata:
component: {{ .Values.tubemqManager.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.tubemqManager.component }}
- replicas: {{ .Values.tubemqManager.replicaCount }}
+ replicas: {{ .Values.tubemqManager.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.tubemqManager.component }}
+ updateStrategy:
+ type: {{ .Values.tubemqManager.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.tubemqManager.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.tubemqManager.component }}
+ {{- if .Values.tubemqManager.annotations }}
+ annotations:
+ {{- toYaml .Values.tubemqManager.annotations | nindent 8 }}
+ {{- end }}
spec:
+ {{- if .Values.tubemqManager.tolerations }}
+ tolerations:
+ {{- toYaml .Values.tubemqManager.tolerations | nindent 8 }}
+ {{- end }}
+ {{- if .Values.tubemqManager.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.tubemqManager.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.tubemqManager.affinity }}
+ affinity:
+ {{- toYaml .Values.tubemqManager.affinity | nindent 8 }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.tubemqManager.terminationGracePeriodSeconds }}
initContainers:
- name: wait-{{ .Values.mysql.component }}-ready
image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
@@ -43,13 +63,13 @@ spec:
command: [ "/bin/sh", "-c" ]
args:
- |
- count={{ .Values.mysql.replicaCount }}
+ count={{ .Values.mysql.replicas }}
for i in $(seq 0 $(expr $count - 1))
do
replica="{{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.mysql.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.mysql.ports.server }}
- until [ $(nc -z -w 5 $host $port; echo $?) -eq 0 ]
+ host="$replica.{{ template "inlong.mysql.hostname" . }}"
+ port={{ .Values.mysql.port }}
+ until nc -z $host $port 2>/dev/null
do
echo "waiting for $replica to be ready"
sleep 3
@@ -61,13 +81,13 @@ spec:
command: [ "/bin/sh", "-c" ]
args:
- |
- count={{ .Values.tubemqMaster.replicaCount }}
+ count={{ .Values.tubemqMaster.replicas }}
for i in $(seq 0 $(expr $count - 1))
do
replica="{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}.{{ .Release.Namespace }}.svc.cluster.local"
+ host="$replica.{{ template "inlong.tubemqMaster.hostname" . }}"
port={{ .Values.tubemqMaster.ports.webPort }}
- until curl $host:$port/index.htm
+ until nc -z $host $port 2>/dev/null
do
echo "waiting for $replica to be ready"
sleep 3
@@ -79,7 +99,7 @@ spec:
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.tubemqManager.resources }}
resources:
-{{ toYaml .Values.tubemqManager.resources | indent 12 }}
+ {{- toYaml .Values.tubemqManager.resources | nindent 12 }}
{{- end }}
env:
- name: MYSQL_HOST
diff --git a/docker/kubernetes/templates/tubemq-master-service.yaml b/docker/kubernetes/templates/tubemq-master-service.yaml
index 4ad78149d..83974ab15 100644
--- a/docker/kubernetes/templates/tubemq-master-service.yaml
+++ b/docker/kubernetes/templates/tubemq-master-service.yaml
@@ -24,14 +24,20 @@ metadata:
{{- include "inlong.commonLabels" . | nindent 4 }}
component: {{ .Values.tubemqMaster.component }}
annotations:
-{{ toYaml .Values.tubemqMaster.service.annotations | indent 4 }}
+ {{- toYaml .Values.tubemqMaster.service.annotations | nindent 4 }}
spec:
type: {{ .Values.tubemqMaster.service.type }}
+ {{- if and (eq .Values.tubemqMaster.service.type "ClusterIP") .Values.tubemqMaster.service.clusterIP }}
+ clusterIP: {{ .Values.tubemqMaster.service.clusterIP }}
+ {{- end }}
ports:
- name: mstr-web-port
protocol: TCP
port: {{ .Values.tubemqMaster.ports.webPort }}
targetPort: 8080
+ {{- if and (eq .Values.tubemqMaster.service.type "NodePort") .Values.tubemqMaster.service.webNodePort }}
+ nodePort: {{ .Values.tubemqMaster.service.webNodePort }}
+ {{- end }}
- name: mstr-help-port
protocol: TCP
port: {{ .Values.tubemqMaster.ports.helpPort }}
@@ -40,6 +46,16 @@ spec:
protocol: TCP
port: {{ .Values.tubemqMaster.ports.rpcPort }}
targetPort: 8715
+ {{- if and (eq .Values.tubemqMaster.service.type "LoadBalancer") .Values.tubemqMaster.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.tubemqMaster.service.loadBalancerIP }}
+ {{- end }}
+ {{- if and (eq .Values.tubemqMaster.service.type "ExternalName") .Values.tubemqMaster.service.externalName }}
+ externalName: {{ .Values.tubemqMaster.service.externalName }}
+ {{- end }}
+ {{- if .Values.tubemqMaster.service.externalIPs }}
+ externalIPs:
+ {{- toYaml .Values.tubemqMaster.service.externalIPs | nindent 4 }}
+ {{- end }}
selector:
{{- include "inlong.matchLabels" . | nindent 4 }}
component: {{ .Values.tubemqMaster.component }}
diff --git a/docker/kubernetes/templates/tubemq-master-statefulset.yaml b/docker/kubernetes/templates/tubemq-master-statefulset.yaml
index f52e77ca6..f1690dac0 100644
--- a/docker/kubernetes/templates/tubemq-master-statefulset.yaml
+++ b/docker/kubernetes/templates/tubemq-master-statefulset.yaml
@@ -25,51 +25,35 @@ metadata:
component: {{ .Values.tubemqMaster.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}
- replicas: {{ .Values.tubemqMaster.replicaCount }}
+ replicas: {{ .Values.tubemqMaster.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.tubemqMaster.component }}
updateStrategy:
-{{ toYaml .Values.tubemqMaster.updateStrategy | indent 4 }}
- podManagementPolicy: {{ .Values.tubemqMaster.podManagementPolicy }}
+ type: {{ .Values.tubemqMaster.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.tubemqMaster.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.tubemqMaster.component }}
+ {{- if .Values.tubemqMaster.annotations }}
annotations:
-{{ toYaml .Values.tubemqMaster.annotations | indent 8 }}
- spec:
- {{- if .Values.tubemqMaster.nodeSelector }}
- nodeSelector:
-{{ toYaml .Values.tubemqMaster.nodeSelector | indent 8 }}
+ {{- toYaml .Values.tubemqMaster.annotations | nindent 8 }}
{{- end }}
+ spec:
{{- if .Values.tubemqMaster.tolerations }}
tolerations:
-{{ toYaml .Values.tubemqMaster.tolerations | indent 8 }}
+ {{- toYaml .Values.tubemqMaster.tolerations | nindent 8 }}
{{- end }}
- initContainers:
- - name: wait-{{ .Values.zookeeper.component }}-ready
- image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
- imagePullPolicy: {{ .Values.images.pullPolicy }}
- command: [ "/bin/sh", "-c" ]
- args:
- - |
- count={{ .Values.zookeeper.replicaCount }}
- for i in $(seq 0 $(expr $count - 1))
- do
- replica="{{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}-$i"
- host="$replica.{{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}.{{ .Release.Namespace }}.svc.cluster.local"
- port={{ .Values.zookeeper.ports.client }}
- until [ $(echo ruok | nc $host $port ) = 'imok' ]
- do
- echo "waiting for $replica to be ready"
- sleep 3
- done
- done
+ {{- if .Values.tubemqMaster.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.tubemqMaster.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.tubemqMaster.affinity }}
affinity:
- {{- if .Values.affinity.anti_affinity }}
+ {{- if .Values.tubemqMaster.affinity.antiAffinity }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
@@ -88,14 +72,34 @@ spec:
- {{ .Values.tubemqMaster.component }}
topologyKey: "kubernetes.io/hostname"
{{- end }}
- terminationGracePeriodSeconds: {{ .Values.tubemqMaster.gracePeriod }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.tubemqMaster.terminationGracePeriodSeconds }}
+ initContainers:
+ - name: wait-{{ .Values.zookeeper.component }}-ready
+ image: {{ .Values.images.initContainer.repository }}:{{ .Values.images.initContainer.tag }}
+ imagePullPolicy: {{ .Values.images.pullPolicy }}
+ command: [ "/bin/sh", "-c" ]
+ args:
+ - |
+ count={{ .Values.zookeeper.replicas }}
+ for i in $(seq 0 $(expr $count - 1))
+ do
+ replica="{{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}-$i"
+ host="$replica.{{ template "inlong.zookeeper.hostname" . }}"
+ port={{ .Values.zookeeper.ports.client }}
+ until [ $(echo ruok | nc $host $port ) = 'imok' ]
+ do
+ echo "waiting for $replica to be ready"
+ sleep 3
+ done
+ done
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }}
image: {{ .Values.images.tubemqServer.repository }}:{{ .Values.images.tubemqServer.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.tubemqMaster.resources }}
resources:
-{{ toYaml .Values.tubemqMaster.resources | indent 12 }}
+ {{- toYaml .Values.tubemqMaster.resources | nindent 12 }}
{{- end }}
command: [ "/bin/sh", "-c" ]
args:
diff --git a/docker/kubernetes/templates/zookeeper-service.yaml b/docker/kubernetes/templates/zookeeper-service.yaml
index 1f9d49843..297bdfd27 100644
--- a/docker/kubernetes/templates/zookeeper-service.yaml
+++ b/docker/kubernetes/templates/zookeeper-service.yaml
@@ -24,7 +24,7 @@ metadata:
{{- include "inlong.commonLabels" . | nindent 4 }}
component: {{ .Values.zookeeper.component }}
annotations:
-{{ toYaml .Values.zookeeper.service.annotations | indent 4 }}
+ {{- toYaml .Values.zookeeper.service.annotations | nindent 4 }}
spec:
clusterIP: None
ports:
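The recurring `indent` to `nindent` swap throughout this commit is purely cosmetic in the rendered output. `indent 4` only pads each line of its input, so the template directive itself has to sit at column 0; `nindent 4` additionally prepends a newline, and the leading `{{-` chomps the whitespace before the directive, so the template line can be indented to match the surrounding YAML while producing identical output. A minimal sketch of the two equivalent forms:

```yaml
# Old style - the directive must start at column 0 or the padding is doubled:
annotations:
{{ toYaml .Values.zookeeper.service.annotations | indent 4 }}

# New style - the newline comes from nindent, preceding whitespace is
# chomped by "{{-", so the template itself stays readably indented:
annotations:
  {{- toYaml .Values.zookeeper.service.annotations | nindent 4 }}
```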
diff --git a/docker/kubernetes/templates/zookeeper-statefulset.yaml b/docker/kubernetes/templates/zookeeper-statefulset.yaml
index 88ae86de0..cc3b2f727 100644
--- a/docker/kubernetes/templates/zookeeper-statefulset.yaml
+++ b/docker/kubernetes/templates/zookeeper-statefulset.yaml
@@ -25,32 +25,35 @@ metadata:
component: {{ .Values.zookeeper.component }}
spec:
serviceName: {{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}
- replicas: {{ .Values.zookeeper.replicaCount }}
+ replicas: {{ .Values.zookeeper.replicas }}
selector:
matchLabels:
{{- include "inlong.matchLabels" . | nindent 6 }}
component: {{ .Values.zookeeper.component }}
updateStrategy:
-{{ toYaml .Values.zookeeper.updateStrategy | indent 4 }}
- podManagementPolicy: {{ .Values.zookeeper.podManagementPolicy }}
+ type: {{ .Values.zookeeper.updateStrategy.type | quote }}
+ podManagementPolicy: {{ .Values.zookeeper.podManagementPolicy | quote }}
template:
metadata:
labels:
{{- include "inlong.template.labels" . | nindent 8 }}
component: {{ .Values.zookeeper.component }}
+ {{- if .Values.zookeeper.annotations }}
annotations:
-{{ toYaml .Values.zookeeper.annotations | indent 8 }}
- spec:
- {{- if .Values.zookeeper.nodeSelector }}
- nodeSelector:
-{{ toYaml .Values.zookeeper.nodeSelector | indent 8 }}
+ {{- toYaml .Values.zookeeper.annotations | nindent 8 }}
{{- end }}
+ spec:
{{- if .Values.zookeeper.tolerations }}
tolerations:
-{{ toYaml .Values.zookeeper.tolerations | indent 8 }}
+ {{- toYaml .Values.zookeeper.tolerations | nindent 8 }}
{{- end }}
+ {{- if .Values.zookeeper.nodeSelector }}
+ nodeSelector:
+ {{- toYaml .Values.zookeeper.nodeSelector | nindent 8 }}
+ {{- end }}
+ {{- if .Values.zookeeper.affinity }}
affinity:
- {{- if .Values.affinity.anti_affinity }}
+ {{- if .Values.zookeeper.affinity.antiAffinity }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
@@ -69,14 +72,15 @@ spec:
- {{ .Values.zookeeper.component }}
topologyKey: "kubernetes.io/hostname"
{{- end }}
- terminationGracePeriodSeconds: {{ .Values.zookeeper.gracePeriod }}
+ {{- end }}
+ terminationGracePeriodSeconds: {{ .Values.zookeeper.terminationGracePeriodSeconds }}
containers:
- name: {{ template "inlong.fullname" . }}-{{ .Values.zookeeper.component }}
image: {{ .Values.images.tubemqServer.repository }}:{{ .Values.images.tubemqServer.tag }}
imagePullPolicy: {{ .Values.images.pullPolicy }}
{{- if .Values.zookeeper.resources }}
resources:
-{{ toYaml .Values.zookeeper.resources | indent 12 }}
+ {{- toYaml .Values.zookeeper.resources | nindent 12 }}
{{- end }}
command: [ "/bin/sh", "-c" ]
args:
@@ -93,7 +97,7 @@ spec:
- name: ZOOKEEPER_SERVERS
value:
{{- $global := . }}
- {{ range $i, $e := until (.Values.zookeeper.replicaCount | int) }}{{ if ne $i 0 }},{{ end }}{{ template "inlong.fullname" $global }}-{{ $global.Values.zookeeper.component }}-{{ printf "%d" $i }}{{ end }}
+ {{ range $i, $e := until (.Values.zookeeper.replicas | int) }}{{ if ne $i 0 }},{{ end }}{{ template "inlong.fullname" $global }}-{{ $global.Values.zookeeper.component }}-{{ printf "%d" $i }}{{ end }}
{{- if .Values.zookeeper.probe.readiness.enabled }}
readinessProbe:
exec:
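The `ZOOKEEPER_SERVERS` range expression above joins one pod name per replica with commas, using `ne $i 0` to suppress the leading separator. With `zookeeper.replicas: 3` and a release whose fullname renders as `my-inlong` (an assumed name for illustration), the environment entry comes out as:

```yaml
- name: ZOOKEEPER_SERVERS
  value: my-inlong-zookeeper-0,my-inlong-zookeeper-1,my-inlong-zookeeper-2
```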
diff --git a/docker/kubernetes/values.yaml b/docker/kubernetes/values.yaml
index 67bab6b94..ee9ee01fc 100644
--- a/docker/kubernetes/values.yaml
+++ b/docker/kubernetes/values.yaml
@@ -54,16 +54,39 @@ volumes:
persistence: false
storageClassName: "local-storage"
-affinity:
- anti_affinity: false
-
ingress:
enabled: false
hosts:
agent:
component: "agent"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
@@ -80,7 +103,33 @@ agent:
dashboard:
component: "dashboard"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
@@ -103,7 +152,33 @@ dashboard:
dataproxy:
component: "dataproxy"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
@@ -132,7 +207,33 @@ dataproxy:
tubemqManager:
component: "tubemq-manager"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
@@ -148,7 +249,33 @@ tubemqManager:
manager:
component: "manager"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
@@ -179,7 +306,33 @@ manager:
audit:
component: "audit"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
@@ -195,25 +348,76 @@ audit:
# If not exists external MySQL, InLong will use it.
mysql:
component: "mysql"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+ podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity: {}
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
+ terminationGracePeriodSeconds: 30
resources:
requests:
cpu: 1
memory: "1Gi"
username: "root"
password: "inlong"
- ports:
- server: 3306
+ port: 3306
volumes:
name: data
size: "10Gi"
zookeeper:
component: "zookeeper"
- replicaCount: 3
+ replicas: 3
+ # The updateStrategy field allows you to configure or disable automated rolling updates for containers, labels, resource requests/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: "RollingUpdate"
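+ # As a sketch, and assuming the chart passes this value through to the
+ # StatefulSet spec unchanged, a staged rollout that only updates pods with
+ # ordinal >= 1 would look like:
+ # updateStrategy:
+ #   type: "RollingUpdate"
+ #   rollingUpdate:
+ #     partition: 1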
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations:
+ prometheus.io/scrape: "true"
+ prometheus.io/port: "8000"
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity:
+ antiAffinity: false
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
+ terminationGracePeriodSeconds: 30
ports:
client: 2181
follower: 2888
@@ -229,11 +433,6 @@ zookeeper:
failureThreshold: 10
initialDelaySeconds: 10
periodSeconds: 30
- annotations:
- prometheus.io/scrape: "true"
- prometheus.io/port: "8000"
- tolerations: []
- gracePeriod: 30
resources:
requests:
cpu: 1
@@ -250,10 +449,34 @@ zookeeper:
tubemqMaster:
component: "tubemq-master"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure or disable automated rolling updates for containers, labels, resource requests/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity:
+ antiAffinity: false
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
+ terminationGracePeriodSeconds: 30
ports:
rpcPort: 8715
webPort: 8080
@@ -269,8 +492,6 @@ tubemqMaster:
failureThreshold: 10
initialDelaySeconds: 10
periodSeconds: 30
- tolerations: []
- gracePeriod: 30
resources:
requests:
cpu: 1
@@ -279,9 +500,21 @@ tubemqMaster:
name: data
size: "10Gi"
service:
- type: LoadBalancer
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
+ # type determines how the service is exposed (this chart defaults to NodePort). Valid options are ClusterIP, NodePort, LoadBalancer, and ExternalName.
+ type: NodePort
+ # clusterIP is the IP address of the service and is usually assigned randomly by the master when service type is ClusterIP
+ clusterIP:
+ # webNodePort is the web port on each node on which this service is exposed when service type is NodePort
+ # the range of valid ports is 30000-32767
+ webNodePort: 30880
+ # when service type is LoadBalancer, the load balancer will be created with the IP specified in this field
+ loadBalancerIP:
+ # externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service, requires service type to be ExternalName
+ externalName:
+ # externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service
+ externalIPs:
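+ # For example, to expose the master web port through a cloud load balancer with
+ # a pre-allocated address (203.0.113.10 is a documentation-range placeholder),
+ # you could set:
+ # type: LoadBalancer
+ # loadBalancerIP: "203.0.113.10"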
pdb:
usePolicy: true
maxUnavailable: 1
@@ -294,10 +527,34 @@ tubemqMaster:
tubemqBroker:
component: "tubemq-broker"
- replicaCount: 1
+ replicas: 1
+ # The updateStrategy field allows you to configure or disable automated rolling updates for containers, labels, resource requests/limits, and annotations for the Pods in a StatefulSet.
+ # There are two possible values: OnDelete and RollingUpdate.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: "RollingUpdate"
+ # StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
podManagementPolicy: "OrderedReady"
+ # You can use annotations to attach arbitrary non-identifying metadata to objects.
+ # Clients such as tools and libraries can retrieve this metadata.
+ # For more details, please check out https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ annotations: {}
+ # Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ tolerations: []
+ # nodeSelector is the simplest recommended form of node selection constraint.
+ # You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
+ # Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+ nodeSelector: {}
+ # Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels.
+ # For more details, please check out https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+ affinity:
+ antiAffinity: false
+ # Optional duration in seconds the pod needs to terminate gracefully.
+ # For more details, please check out https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
+ terminationGracePeriodSeconds: 30
ports:
rpcPort: 8123
webPort: 8081
@@ -312,8 +569,6 @@ tubemqBroker:
failureThreshold: 10
initialDelaySeconds: 10
periodSeconds: 30
- tolerations: []
- gracePeriod: 30
resources:
requests:
cpu: 1
@@ -322,9 +577,21 @@ tubemqBroker:
name: data
size: "10Gi"
service:
- type: LoadBalancer
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
+ # type determines how the service is exposed (this chart defaults to NodePort). Valid options are ClusterIP, NodePort, LoadBalancer, and ExternalName.
+ type: NodePort
+ # clusterIP is the IP address of the service and is usually assigned randomly by the master when service type is ClusterIP
+ clusterIP:
+ # webNodePort is the web port on each node on which this service is exposed when service type is NodePort
+ # the range of valid ports is 30000-32767
+ webNodePort: 30881
+ # when service type is LoadBalancer, the load balancer will be created with the IP specified in this field
+ loadBalancerIP:
+ # externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service, requires service type to be ExternalName
+ externalName:
+ # externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service
+ externalIPs:
pdb:
usePolicy: true
maxUnavailable: 1