Posted to notifications@skywalking.apache.org by ha...@apache.org on 2020/03/05 10:49:53 UTC
[skywalking-kubernetes] branch master updated: 6.6.0 (#38)
This is an automated email from the ASF dual-hosted git repository.
hanahmily pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/skywalking-kubernetes.git
The following commit(s) were added to refs/heads/master by this push:
new 5c42031 6.6.0 (#38)
5c42031 is described below
commit 5c4203183a7970b1cfefd410ac9349e17265233d
Author: Gao Hongtao <ha...@gmail.com>
AuthorDate: Thu Mar 5 18:49:46 2020 +0800
6.6.0 (#38)
* 6.6.0
* 6.6.0 tested
---
README.md | 27 +-
chart/skywalking/Chart.yaml | 8 +-
chart/skywalking/README.md | 136 ++++-----
chart/skywalking/templates/NOTES.txt | 2 +-
chart/skywalking/templates/_helpers.tpl | 9 +-
chart/skywalking/templates/es-init.job.yaml | 4 +-
chart/skywalking/templates/oap-deployment.yaml | 2 +-
chart/skywalking/values-es6.yaml | 348 +++++++++++++++++++++
chart/skywalking/values.yaml | 404 +++++++++++++++----------
9 files changed, 684 insertions(+), 256 deletions(-)
diff --git a/README.md b/README.md
index 6a2c5b9..8cbd7d1 100644
--- a/README.md
+++ b/README.md
@@ -6,16 +6,24 @@ Apache SkyWalking Kubernetes
To install and configure SkyWalking in a Kubernetes cluster, follow these instructions.
## Documentation
-#### Deploy SkyWalking and Elasticsearch (default)
+#### Deploy SkyWalking and Elasticsearch 7 (default)
```shell script
$ cd chart
-$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
+$ helm repo add elastic https://helm.elastic.co
$ helm dep up skywalking
$ helm install <release_name> skywalking -n <namespace>
+```
+
+**Note**: To deploy Elasticsearch 6 instead, execute the following commands:
+
+```shell script
+$ helm dep up skywalking
+
+$ helm install <release_name> skywalking -n <namespace> --values ./skywalking/values-es6.yaml
```
#### Only deploy SkyWalking, and use an existing Elasticsearch
@@ -26,11 +34,23 @@ Only need to close the elasticsearch deployed by chart default and configure the
```shell script
$ cd chart
-$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
+$ helm repo add elastic https://helm.elastic.co
+
+$ helm dep up skywalking
+$ helm install <release_name> skywalking -n <namespace> \
+ --set elasticsearch.enabled=false \
+ --set elasticsearch.config.host=<es_host> \
+ --set elasticsearch.config.port.http=<es_port>
+```
+
+**Note**: Make sure your ES cluster version is 7.x. If your cluster version is 6.x, execute the following commands:
+
+```shell script
$ helm dep up skywalking
$ helm install <release_name> skywalking -n <namespace> \
+ --values ./skywalking/values-es6.yaml \
--set elasticsearch.enabled=false \
--set elasticsearch.config.host=<es_host> \
--set elasticsearch.config.port.http=<es_port>
@@ -46,6 +66,7 @@ This is recommended as the best practice to deploy SkyWalking backend stack into
| SkyWalking version | Chart version |
| ------------------ | ------------- |
| 6.5.0 | 1.0.0 |
+| 6.6.0 | 1.1.0 |
Note: The source code for the release chart is located in the chart folder in the master branch.
diff --git a/chart/skywalking/Chart.yaml b/chart/skywalking/Chart.yaml
index 8ab6ef0..4d841d5 100644
--- a/chart/skywalking/Chart.yaml
+++ b/chart/skywalking/Chart.yaml
@@ -16,8 +16,8 @@
apiVersion: v2
name: skywalking
home: https://skywalking.apache.org
-version: 1.0.0
-appVersion: 6.5.0
+version: 1.1.0
+appVersion: 6.6.0
description: Apache SkyWalking APM System
icon: https://raw.githubusercontent.com/apache/skywalking-kubernetes/master/logo/sw-logo-for-chart.jpg
sources:
@@ -30,6 +30,6 @@ maintainers:
dependencies:
- name: elasticsearch
- version: ~1.32.0
- repository: https://kubernetes-charts.storage.googleapis.com/
+ version: ~7.5.1
+ repository: https://helm.elastic.co/
condition: elasticsearch.enabled
\ No newline at end of file
diff --git a/chart/skywalking/README.md b/chart/skywalking/README.md
index 9320bf4..b1c380c 100644
--- a/chart/skywalking/README.md
+++ b/chart/skywalking/README.md
@@ -76,75 +76,67 @@ The following table lists the configurable parameters of the Skywalking chart an
| `ui.service.annotations` | Kubernetes service annotations | `{}` |
| `ui.service.loadBalancerSourceRanges` | Limit load balancer source IPs to list of CIDRs (where available) | `[]` |
| `elasticsearch.enabled` | Spin up a new elasticsearch cluster for SkyWalking | `true` |
-| `elasticsearch.client.name` | Client component name | `client` |
-| `elasticsearch.client.replicas` | Client node replicas (deployment) | `2` |
-| `elasticsearch.client.resources` | Client node resources requests & limits | `{} - cpu limit must be an integer` |
-| `elasticsearch.client.priorityClassName` | Client priorityClass | `nil` |
-| `elasticsearch.client.heapSize` | Client node heap size | `512m` |
-| `elasticsearch.client.podAnnotations` | Client Deployment annotations | `{}` |
-| `elasticsearch.client.nodeSelector` | Node labels for client pod assignment | `{}` |
-| `elasticsearch.client.tolerations` | Client tolerations | `[]` |
-| `elasticsearch.client.serviceAnnotations` | Client Service annotations | `{}` |
-| `elasticsearch.client.serviceType` | Client service type | `ClusterIP` |
-| `elasticsearch.client.httpNodePort` | Client service HTTP NodePort port number. Has no effect if client.serviceType is not `NodePort`.| `nil` |
-| `elasticsearch.client.loadBalancerIP` | Client loadBalancerIP | `{}` |
-| `elasticsearch.client.loadBalancerSourceRanges` | Client loadBalancerSourceRanges | `{}` |
-| `elasticsearch.client.antiAffinity` | Client anti-affinity policy | `soft` |
-| `elasticsearch.client.nodeAffinity` | Client node affinity policy | `{}` |
-| `elasticsearch.client.initResources` | Client initContainer resources requests & limits | `{}` |
-| `elasticsearch.client.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for client | `""` |
-| `elasticsearch.client.ingress.enabled` | Enable Client Ingress | `false` |
-| `elasticsearch.client.ingress.user` | If this & password are set, enable basic-auth on ingress | `nil` |
-| `elasticsearch.client.ingress.password` | If this & user are set, enable basic-auth on ingress | `nil` |
-| `elasticsearch.client.ingress.annotations` | Client Ingress annotations | `{}` |
-| `elasticsearch.client.ingress.hosts` | Client Ingress Hostnames | `[]` |
-| `elasticsearch.client.ingress.tls` | Client Ingress TLS configuration | `[]` |
-| `elasticsearch.client.exposeTransportPort` | Expose transport port 9300 on client service (ClusterIP) | `false` |
-| `elasticsearch.master.initResources` | Master initContainer resources requests & limits | `{}` |
-| `elasticsearch.master.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for master | `""` |
-| `elasticsearch.master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` |
-| `elasticsearch.master.name` | Master component name | `master` |
-| `elasticsearch.master.replicas` | Master node replicas (deployment) | `2` |
-| `elasticsearch.master.resources` | Master node resources requests & limits | `{} - cpu limit must be an integer` |
-| `elasticsearch.master.priorityClassName` | Master priorityClass | `nil` |
-| `elasticsearch.master.podAnnotations` | Master Deployment annotations | `{}` |
-| `elasticsearch.master.nodeSelector` | Node labels for master pod assignment | `{}` |
-| `elasticsearch.master.tolerations` | Master tolerations | `[]` |
-| `elasticsearch.master.heapSize` | Master node heap size | `512m` |
-| `elasticsearch.master.name` | Master component name | `master` |
-| `elasticsearch.master.persistence.enabled` | Master persistent enabled/disabled | `false` |
-| `elasticsearch.master.persistence.name` | Master statefulset PVC template name | `data` |
-| `elasticsearch.master.persistence.size` | Master persistent volume size | `4Gi` |
-| `elasticsearch.master.persistence.storageClass` | Master persistent volume Class | `nil` |
-| `elasticsearch.master.persistence.accessMode` | Master persistent Access Mode | `ReadWriteOnce` |
-| `elasticsearch.master.readinessProbe` | Master container readiness probes | see `values.yaml` for defaults |
-| `elasticsearch.master.antiAffinity` | Master anti-affinity policy | `soft` |
-| `elasticsearch.master.nodeAffinity` | Master node affinity policy | `{}` |
-| `elasticsearch.master.podManagementPolicy` | Master pod creation strategy | `OrderedReady` |
-| `elasticsearch.master.updateStrategy` | Master node update strategy policy | `{type: "onDelete"}` |
-| `elasticsearch.data.initResources` | Data initContainer resources requests & limits | `{}` |
-| `elasticsearch.data.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for data | `""` |
-| `elasticsearch.data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` |
-| `elasticsearch.data.replicas` | Data node replicas (statefulset) | `2` |
-| `elasticsearch.data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` |
-| `elasticsearch.data.priorityClassName` | Data priorityClass | `nil` |
-| `elasticsearch.data.heapSize` | Data node heap size | `1536m` |
-| `elasticsearch.data.hooks.drain.enabled` | Data nodes: Enable drain pre-stop and post-start hook | `true` |
-| `elasticsearch.data.persistence.enabled` | Data persistent enabled/disabled | `false` |
-| `elasticsearch.data.persistence.name` | Data statefulset PVC template name | `data` |
-| `elasticsearch.data.persistence.size` | Data persistent volume size | `30Gi` |
-| `elasticsearch.data.persistence.storageClass` | Data persistent volume Class | `nil` |
-| `elasticsearch.data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` |
-| `elasticsearch.data.readinessProbe` | Readiness probes for data-containers | see `values.yaml` for defaults |
-| `elasticsearch.data.podAnnotations` | Data StatefulSet annotations | `{}` |
-| `elasticsearch.data.nodeSelector` | Node labels for data pod assignment | `{}` |
-| `elasticsearch.data.tolerations` | Data tolerations | `[]` |
-| `elasticsearch.data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
-| `elasticsearch.data.antiAffinity` | Data anti-affinity policy | `soft` |
-| `elasticsearch.data.nodeAffinity` | Data node affinity policy | `{}` |
-| `elasticsearch.data.podManagementPolicy` | Data pod creation strategy | `OrderedReady` |
-| `elasticsearch.data.updateStrategy` | Data node update strategy policy | `{type: "onDelete"}` |
-
+| `elasticsearch.clusterName` | This will be used as the Elasticsearch [cluster.name](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.name.html) and should be unique per cluster in the namespace | `elasticsearch` |
+| `elasticsearch.nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X` | `master` |
+| `elasticsearch.masterService` | Optional. The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery](#clustering-and-node-discovery) for more information. | `` |
+| `elasticsearch.roles` | A hash map with the [specific roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) for the node group | `master: true`<br>`data: true`<br>`ingest: true` |
+| `elasticsearch.replicas` | Kubernetes replica count for the statefulset (i.e. how many pods) | `3` |
+| `elasticsearch.minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/discovery-settings.html#minimum_master_nodes). Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7. | `2` |
+| `elasticsearch.esMajorVersion` | Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
+| `elasticsearch.esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml](./values.yaml) for an example of the formatting. | `{}` |
+| `elasticsearch.extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` |
+| `elasticsearch.extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function | `""` |
+| `elasticsearch.extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function | `""` |
+| `elasticsearch.extraInitContainers` | Templatable string of additional init containers to be passed to the `tpl` function | `""` |
+| `elasticsearch.secretMounts` | Allows you to easily mount a secret as a file inside the statefulset. Useful for mounting certificates and other secrets. See [values.yaml](./values.yaml) for an example | `[]` |
+| `elasticsearch.image` | The Elasticsearch docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
+| `elasticsearch.imageTag` | The Elasticsearch docker image tag | `7.5.1` |
+| `elasticsearch.imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` |
+| `elasticsearch.podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Elasticsearch pods | `{}` |
+| `elasticsearch.labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Elasticsearch pods | `{}` |
+| `elasticsearch.esJavaOpts` | [Java options](https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html) for Elasticsearch. This is where you should configure the [jvm heap size](https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html) | `-Xmx1g -Xms1g` |
+| `elasticsearch.resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the statefulset | `requests.cpu: 100m`<br>`requests.memory: 2Gi`<br>`limits.cpu: 1000m`<br>`limits.memory: 2Gi` |
+| `elasticsearch.initResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the initContainer in the statefulset | {} |
+| `elasticsearch.sidecarResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the sidecar containers in the statefulset | {} |
+| `elasticsearch.networkHost` | Value for the [network.host Elasticsearch setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html) | `0.0.0.0` |
+| `elasticsearch.volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for statefulsets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | `accessModes: [ "ReadWriteOnce" ]`<br>`resources.requests.storage: 30Gi` |
+| `elasticsearch.persistence.annotations` | Additional persistence annotations for the `volumeClaimTemplate` | `{}` |
+| `elasticsearch.persistence.enabled` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) which don't require persistent data. | `true` |
+| `elasticsearch.priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` |
+| `elasticsearch.antiAffinityTopologyKey` | The [anti-affinity topology key](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
+| `elasticsearch.antiAffinity` | Setting this to hard enforces the [anti-affinity rules](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). If it is set to soft it will be done "best effort". Other values will be ignored. | `hard` |
+| `elasticsearch.nodeAffinity` | Value for the [node affinity settings](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature) | `{}` |
+| `elasticsearch.podManagementPolicy` | By default Kubernetes [deploys statefulsets serially](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies). This deploys them in parallel so that they can discover each other | `Parallel` |
+| `elasticsearch.protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
+| `elasticsearch.httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings) in `extraEnvs` | `9200` |
+| `elasticsearch.transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings) in `extraEnvs` | `9300` |
+| `elasticsearch.service.labels` | Labels to be added to non-headless service | `{}` |
+| `elasticsearch.service.labelsHeadless` | Labels to be added to headless service | `{}` |
+| `elasticsearch.service.type` | Type of elasticsearch service. [Service Types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) | `ClusterIP` |
+| `elasticsearch.service.nodePort` | Custom [nodePort](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport) port that can be set if you are using `service.type: nodePort`. | `` |
+| `elasticsearch.service.annotations` | Annotations that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` [Annotations](https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws) | `{}` |
+| `elasticsearch.service.httpPortName` | The name of the http port within the service | `http` |
+| `elasticsearch.service.transportPortName` | The name of the transport port within the service | `transport` |
+| `elasticsearch.updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) for the statefulset. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
+| `elasticsearch.maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
+| `elasticsearch.fsGroup (DEPRECATED)` | The Group ID (GID) for [securityContext.fsGroup](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) so that the Elasticsearch user can read from the persistent volume | `` |
+| `elasticsearch.podSecurityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) for the pod | `fsGroup: 1000`<br>`runAsUser: 1000` |
+| `elasticsearch.securityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) for the container | `capabilities.drop:[ALL]`<br>`runAsNonRoot: true`<br>`runAsUser: 1000` |
+| `elasticsearch.terminationGracePeriod` | The [terminationGracePeriod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) in seconds used when trying to stop the pod | `120` |
+| `elasticsearch.sysctlInitContainer.enabled` | Allows you to disable the sysctlInitContainer if you are setting vm.max_map_count with another method | `true` |
+| `elasticsearch.sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count) needed for Elasticsearch | `262144` |
+| `elasticsearch.readinessProbe` | Configuration fields for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
+| `elasticsearch.clusterHealthCheckParams` | The [Elasticsearch cluster health status params](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params) that will be used by readinessProbe command | `wait_for_status=green&timeout=1s` |
+| `elasticsearch.imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` |
+| `elasticsearch.nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) so that you can target specific nodes for your Elasticsearch cluster | `{}` |
+| `elasticsearch.tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` |
+| `elasticsearch.ingress` | Configurable [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose the Elasticsearch service. See [`values.yaml`](./values.yaml) for an example | `enabled: false` |
+| `elasticsearch.schedulerName` | Name of the [alternate scheduler](https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods) | `nil` |
+| `elasticsearch.masterTerminationFix` | A workaround needed for Elasticsearch < 7.2 to prevent master status being lost during restarts [#63](https://github.com/elastic/helm-charts/issues/63) | `false` |
+| `elasticsearch.lifecycle` | Allows you to add lifecycle configuration. See [values.yaml](./values.yaml) for an example of the formatting. | `{}` |
+| `elasticsearch.keystore` | Allows you to map Kubernetes secrets into the keystore. See the [config example](/elasticsearch/examples/config/values.yaml) and [how to use the keystore](#how-to-use-the-keystore) | `[]` |
+| `elasticsearch.rbac` | Configuration for creating a role, role binding and service account as part of this helm chart with `create: true`. Also can be used to reference an external service account with `serviceAccountName: "externalServiceAccountName"`. | `create: false`<br>`serviceAccountName: ""` |
+| `elasticsearch.podSecurityPolicy` | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | `create: false`<br>`name: ""` |
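The `minimumMasterNodes` guidance in the table above (`(master_eligible_nodes / 2) + 1`, relevant only for ES 6.x) can be sanity-checked with a small shell arithmetic sketch; the node count here is an illustrative assumption, not a chart default:

```shell
# Quorum sketch for ES 6.x master election (ignored by ES >= 7).
# MASTER_ELIGIBLE=3 is an assumed example cluster size.
MASTER_ELIGIBLE=3
MIN_MASTERS=$(( MASTER_ELIGIBLE / 2 + 1 ))
echo "discovery.zen.minimum_master_nodes=${MIN_MASTERS}"
```

With 3 master-eligible nodes this yields 2, which matches the chart's default `minimumMasterNodes: 2`.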
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -187,15 +179,15 @@ ui:
## Must be provided if Ingress is enabled
##
hosts:
- - skywalking.domain.com
+ - skywalking
## Skywalking ui server Ingress TLS configuration
## Secrets must be manually created in the namespace
##
tls:
- - secretName: skywalking-tls
+ - secretName: skywalking
hosts:
- - skywalking.domain.com
+ - skywalking
```
### Envoy ALS
diff --git a/chart/skywalking/templates/NOTES.txt b/chart/skywalking/templates/NOTES.txt
index a8bb2a6..38bdc2f 100644
--- a/chart/skywalking/templates/NOTES.txt
+++ b/chart/skywalking/templates/NOTES.txt
@@ -48,7 +48,7 @@ Get the UI URL by running these commands:
{{- end }}
{{- if .Values.elasticsearch.enabled }}
-{{- if and .Values.elasticsearch.master.persistence.enabled .Values.elasticsearch.data.persistence.enabled }}
+{{- if .Values.elasticsearch.persistence.enabled }}
{{- else }}
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
diff --git a/chart/skywalking/templates/_helpers.tpl b/chart/skywalking/templates/_helpers.tpl
index b17f2c6..d7009b4 100644
--- a/chart/skywalking/templates/_helpers.tpl
+++ b/chart/skywalking/templates/_helpers.tpl
@@ -63,19 +63,12 @@ Create the name of the service account to use for the oap cluster
{{ default (include "skywalking.oap.fullname" .) .Values.serviceAccounts.oap }}
{{- end -}}
-{{- define "call-nested" }}
-{{- $dot := index . 0 }}
-{{- $subchart := index . 1 }}
-{{- $template := index . 2 }}
-{{- include $template (dict "Chart" (dict "Name" $subchart) "Values" (index $dot.Values $subchart) "Release" $dot.Release "Capabilities" $dot.Capabilities) }}
-{{- end }}
-
{{- define "skywalking.containers.wait-for-es" -}}
- name: wait-for-elasticsearch
image: busybox:1.30
imagePullPolicy: IfNotPresent
{{- if .Values.elasticsearch.enabled }}
- command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 {{ include "call-nested" (list . "elasticsearch" "elasticsearch.client.fullname") }} 9200 && exit 0 || sleep 5; done; exit 1']
+ command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 {{ .Values.elasticsearch.clusterName }}-{{ .Values.elasticsearch.nodeGroup }} {{ .Values.elasticsearch.httpPort }} && exit 0 || sleep 5; done; exit 1']
{{- else }}
command: ['sh', '-c', 'for i in $(seq 1 60); do nc -z -w3 {{ .Values.elasticsearch.config.host }} {{ .Values.elasticsearch.config.port.http }} && exit 0 || sleep 5; done; exit 1']
{{- end }}
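With the `call-nested` helper removed, the init container simply targets the service name the elastic chart creates, `<clusterName>-<nodeGroup>`. Below is a standalone sketch of the rendered probe, assuming the chart defaults visible in the diff (`elasticsearch`, `master`, `9200`); the wait loop itself is left commented out since it needs in-cluster DNS:

```shell
#!/bin/sh
# Service name as the new template composes it (chart defaults assumed).
CLUSTER_NAME="elasticsearch"
NODE_GROUP="master"
HTTP_PORT=9200
ES_SERVICE="${CLUSTER_NAME}-${NODE_GROUP}"
echo "probing ${ES_SERVICE}:${HTTP_PORT}"
# Rendered wait loop (requires cluster DNS, so commented out here):
# for i in $(seq 1 60); do
#   nc -z -w3 "$ES_SERVICE" "$HTTP_PORT" && exit 0 || sleep 5
# done
# exit 1
```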
diff --git a/chart/skywalking/templates/es-init.job.yaml b/chart/skywalking/templates/es-init.job.yaml
index 5bc8640..c415902 100644
--- a/chart/skywalking/templates/es-init.job.yaml
+++ b/chart/skywalking/templates/es-init.job.yaml
@@ -51,7 +51,7 @@ spec:
value: elasticsearch
- name: SW_STORAGE_ES_CLUSTER_NODES
{{- if .Values.elasticsearch.enabled }}
- value: "{{ include "call-nested" (list . "elasticsearch" "elasticsearch.client.fullname") }}:9200"
-{{- else }}
+ value: "{{ .Values.elasticsearch.clusterName }}-{{ .Values.elasticsearch.nodeGroup }}:{{ .Values.elasticsearch.httpPort }}"
+ {{- else }}
value: "{{ .Values.elasticsearch.config.host }}:{{ .Values.elasticsearch.config.port.http }}"
{{- end }}
\ No newline at end of file
diff --git a/chart/skywalking/templates/oap-deployment.yaml b/chart/skywalking/templates/oap-deployment.yaml
index 3d746ca..a82f3e6 100644
--- a/chart/skywalking/templates/oap-deployment.yaml
+++ b/chart/skywalking/templates/oap-deployment.yaml
@@ -122,7 +122,7 @@ spec:
{{- end }}
- name: SW_STORAGE_ES_CLUSTER_NODES
{{- if .Values.elasticsearch.enabled }}
- value: "{{ include "call-nested" (list . "elasticsearch" "elasticsearch.client.fullname") }}:9200"
+ value: "{{ .Values.elasticsearch.clusterName }}-{{ .Values.elasticsearch.nodeGroup }}:{{ .Values.elasticsearch.httpPort }}"
{{- else }}
value: "{{ .Values.elasticsearch.config.host }}:{{ .Values.elasticsearch.config.port.http }}"
{{- end }}
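For the `elasticsearch.enabled=false` branch, the same `SW_STORAGE_ES_CLUSTER_NODES` value is assembled from `elasticsearch.config.host` and `elasticsearch.config.port.http` instead. A sketch of the rendered value, using a hypothetical external host (`es.example.internal` is an assumption, not a chart value):

```shell
# External-cluster branch: host/port come from elasticsearch.config.*
ES_HOST="es.example.internal"   # assumption: your existing ES host
ES_HTTP_PORT=9200
echo "SW_STORAGE_ES_CLUSTER_NODES=${ES_HOST}:${ES_HTTP_PORT}"
```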
diff --git a/chart/skywalking/values-es6.yaml b/chart/skywalking/values-es6.yaml
new file mode 100644
index 0000000..15467f8
--- /dev/null
+++ b/chart/skywalking/values-es6.yaml
@@ -0,0 +1,348 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Default values for skywalking.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+serviceAccounts:
+ oap:
+
+oap:
+ name: skywalking-oap
+ image:
+ repository: apache/skywalking-oap-server
+ tag: 6.6.0-es6
+ pullPolicy: IfNotPresent
+ ports:
+ grpc: 11800
+ rest: 12800
+ replicas: 2
+ service:
+ type: ClusterIP
+ javaOpts: -Xmx2g -Xms2g
+ antiAffinity: "soft"
+ nodeAffinity: {}
+ nodeSelector: {}
+ tolerations: []
+ resources: {}
+ # limits:
+ # cpu: 8
+ # memory: 8Gi
+ # requests:
+ # cpu: 8
+ # memory: 4Gi
+ # podAnnotations:
+ # example: oap-foo
+ envoy:
+ als:
+ enabled: false
+ # for more about Envoy ALS, please refer to https://github.com/apache/skywalking/blob/master/docs/en/setup/envoy/als_setting.md#observe-service-mesh-through-als
+ istio:
+ adapter:
+ enabled: false
+ env:
+ # for more environment variables, please refer to https://hub.docker.com/r/apache/skywalking-oap-server
+ # or https://github.com/apache/skywalking-docker/blob/master/6/6.4/oap/README.md#sw_telemetry
+ui:
+ name: skywalking-ui
+ replicas: 1
+ image:
+ repository: apache/skywalking-ui
+ tag: 6.6.0
+ pullPolicy: IfNotPresent
+ # podAnnotations:
+ # example: oap-foo
+ ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts: []
+ # - skywalking.local
+ tls: []
+ # - secretName: skywalking-tls
+ # hosts:
+ # - skywalking.local
+ service:
+ type: ClusterIP
+ # clusterIP: None
+ externalPort: 80
+ internalPort: 8080
+ ## External IP addresses of service
+ ## Default: nil
+ ##
+ # externalIPs:
+ # - 192.168.0.1
+ #
+ ## LoadBalancer IP if service.type is LoadBalancer
+ ## Default: nil
+ ##
+ # loadBalancerIP: 10.2.2.2
+ # Annotation example: setup ssl with aws cert when service.type is LoadBalancer
+ # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
+ annotations: {}
+ ## Limit load balancer source ips to list of CIDRs (where available)
+ # loadBalancerSourceRanges: []
+
+elasticsearch:
+ enabled: true
+# config:
+# port:
+# http: 9200
+# host: elasticsearch # es service on kubernetes or host
+ clusterName: "elasticsearch"
+ nodeGroup: "master"
+
+ # The service that non-master groups will try to connect to when joining the cluster
+ # This should be set to clusterName + "-" + nodeGroup for your master group
+ masterService: ""
+
+ # Elasticsearch roles that will be applied to this nodeGroup
+ # These will be set as environment variables. E.g. node.master=true
+ roles:
+ master: "true"
+ ingest: "true"
+ data: "true"
+
+ replicas: 3
+ minimumMasterNodes: 2
+
+ esMajorVersion: ""
+
+ # Allows you to add any config files in /usr/share/elasticsearch/config/
+ # such as elasticsearch.yml and log4j2.properties
+ esConfig: {}
+ # elasticsearch.yml: |
+ # key:
+ # nestedkey: value
+ # log4j2.properties: |
+ # key = value
+
+ # Extra environment variables to append to this nodeGroup
+ # This will be appended to the current 'env:' key. You can use any of the kubernetes env
+ # syntax here
+ extraEnvs: []
+ # - name: MY_ENVIRONMENT_VAR
+ # value: the_value_goes_here
+
+ # A list of secrets and their paths to mount inside the pod
+ # This is useful for mounting certificates for security and for mounting
+ # the X-Pack license
+ secretMounts: []
+ # - name: elastic-certificates
+ # secretName: elastic-certificates
+ # path: /usr/share/elasticsearch/config/certs
+
+ image: "docker.elastic.co/elasticsearch/elasticsearch"
+ imageTag: "6.8.6"
+ imagePullPolicy: "IfNotPresent"
+
+ podAnnotations: {}
+ # iam.amazonaws.com/role: es-cluster
+
+ # additional labels
+ labels: {}
+
+ esJavaOpts: "-Xmx1g -Xms1g"
+
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "2Gi"
+ limits:
+ cpu: "1000m"
+ memory: "2Gi"
+
+ initResources: {}
+ # limits:
+ # cpu: "25m"
+ # # memory: "128Mi"
+ # requests:
+ # cpu: "25m"
+ # memory: "128Mi"
+
+ sidecarResources: {}
+ # limits:
+ # cpu: "25m"
+ # # memory: "128Mi"
+ # requests:
+ # cpu: "25m"
+ # memory: "128Mi"
+
+ networkHost: "0.0.0.0"
+
+ volumeClaimTemplate:
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 30Gi
+
+ rbac:
+ create: false
+ serviceAccountName: ""
+
+ podSecurityPolicy:
+ create: false
+ name: ""
+ spec:
+ privileged: true
+ fsGroup:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ volumes:
+ - secret
+ - configMap
+ - persistentVolumeClaim
+
+ persistence:
+ enabled: false
+ annotations: {}
+
+ extraVolumes: ""
+ # - name: extras
+ # emptyDir: {}
+
+ extraVolumeMounts: ""
+ # - name: extras
+ # mountPath: /usr/share/extras
+ # readOnly: true
+
+ extraInitContainers: ""
+ # - name: do-something
+ # image: busybox
+ # command: ['do', 'something']
+
+ # This is the PriorityClass settings as defined in
+ # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
+ priorityClassName: ""
+
+ # By default this will make sure two pods don't end up on the same node
+ # Changing this to a region would allow you to spread pods across regions
+ antiAffinityTopologyKey: "kubernetes.io/hostname"
+
+ # Hard means that by default pods will only be scheduled if there are enough nodes for them
+ # and that they will never end up on the same node. Setting this to soft will do this "best effort"
+ antiAffinity: "hard"
+
+ # This is the node affinity settings as defined in
+ # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
+ nodeAffinity: {}
+
+ # The default is to deploy all pods serially. By setting this to parallel all pods are started at
+ # the same time when bootstrapping the cluster
+ podManagementPolicy: "Parallel"
+
+ protocol: http
+ httpPort: 9200
+ transportPort: 9300
+
+ service:
+ labels: {}
+ labelsHeadless: {}
+ type: ClusterIP
+ nodePort: ""
+ annotations: {}
+ httpPortName: http
+ transportPortName: transport
+
+ updateStrategy: RollingUpdate
+
+ # This is the max unavailable setting for the pod disruption budget
+ # The default value of 1 will make sure that kubernetes won't allow more than 1
+ # of your pods to be unavailable during maintenance
+ maxUnavailable: 1
+
+ podSecurityContext:
+ fsGroup: 1000
+ runAsUser: 1000
+
+ # The following value is deprecated,
+ # please use the above podSecurityContext.fsGroup instead
+ fsGroup: ""
+
+ securityContext:
+ capabilities:
+ drop:
+ - ALL
+ # readOnlyRootFilesystem: true
+ runAsNonRoot: true
+ runAsUser: 1000
+
+ # How long to wait for elasticsearch to stop gracefully
+ terminationGracePeriod: 120
+
+ sysctlVmMaxMapCount: 262144
+
+ readinessProbe:
+ failureThreshold: 3
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ successThreshold: 3
+ timeoutSeconds: 5
+
+ # https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
+ clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
+
+ ## Use an alternate scheduler.
+ ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
+ ##
+ schedulerName: ""
+
+ imagePullSecrets: []
+ nodeSelector: {}
+ tolerations: []
+
+ # Enabling this will publicly expose your Elasticsearch instance.
+ # Only enable this if you have security enabled on your cluster
+ ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+ nameOverride: ""
+ fullnameOverride: ""
+
+ # https://github.com/elastic/helm-charts/issues/63
+ masterTerminationFix: false
+
+ lifecycle: {}
+ # preStop:
+ # exec:
+ # command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
+ # postStart:
+ # exec:
+ # command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
+
+ sysctlInitContainer:
+ enabled: true
+
+ keystore: []
+
+nameOverride: ""
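
The values-es6.yaml defaults above (3 replicas, `minimumMasterNodes: 2`, 2Gi memory requests, hard anti-affinity) assume at least three schedulable nodes. As a hedged sketch for a single-node development cluster (the file name and figures below are illustrative, not recommendations from this chart), an additional override could shrink the Elasticsearch footprint:

```yaml
# dev-es6-values.yaml -- hypothetical, layered on top of values-es6.yaml:
#   helm install <release_name> skywalking -n <namespace> \
#     --values ./skywalking/values-es6.yaml --values dev-es6-values.yaml
elasticsearch:
  replicas: 1
  minimumMasterNodes: 1
  # soft anti-affinity lets all pods schedule onto the same node
  antiAffinity: "soft"
  esJavaOpts: "-Xmx512m -Xms512m"
  resources:
    requests:
      cpu: "100m"
      memory: "1Gi"
    limits:
      cpu: "1000m"
      memory: "1Gi"
```
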
diff --git a/chart/skywalking/values.yaml b/chart/skywalking/values.yaml
index 8736fd2..2d3a38a 100644
--- a/chart/skywalking/values.yaml
+++ b/chart/skywalking/values.yaml
@@ -24,7 +24,7 @@ oap:
name: skywalking-oap
image:
repository: apache/skywalking-oap-server
- tag: 6.5.0
+ tag: 6.6.0-es7
pullPolicy: IfNotPresent
ports:
grpc: 11800
@@ -61,7 +61,7 @@ ui:
replicas: 1
image:
repository: apache/skywalking-ui
- tag: 6.5.0
+ tag: 6.6.0
pullPolicy: IfNotPresent
# podAnnotations:
# example: oap-foo
@@ -100,175 +100,249 @@ ui:
elasticsearch:
enabled: true
- # If elasticsearch.enabled=true values for elasticsearch.
- # or elasticsearch.enabled=false, Will not deploy ES pod. The following config options must be configured
# config:
# port:
# http: 9200
# host: elasticsearch # es service on kubernetes or host
+ clusterName: "elasticsearch"
+ nodeGroup: "master"
- ## Define serviceAccount names for components. Defaults to component's fully qualified name.
- serviceAccounts:
- client:
- create: true
- name:
- master:
- create: true
- name:
- data:
- create: true
- name:
-
- client:
- name: client
- replicas: 2
- serviceType: ClusterIP
- ## If coupled with serviceType = "NodePort", this will set a specific nodePort to the client HTTP port
- # httpNodePort: 30920
- loadBalancerIP: {}
- loadBalancerSourceRanges: {}
- ## (dict) If specified, apply these annotations to the client service
- # serviceAnnotations:
- # example: client-svc-foo
- heapSize: "512m"
- # additionalJavaOpts: "-XX:MaxRAM=512m"
- antiAffinity: "soft"
- nodeAffinity: {}
- nodeSelector: {}
- tolerations: []
- initResources: {}
- # limits:
- # cpu: "25m"
- # # memory: "128Mi"
- # requests:
- # cpu: "25m"
- # memory: "128Mi"
- resources:
- limits:
- cpu: "1"
- # memory: "1024Mi"
- requests:
- cpu: "25m"
- memory: "512Mi"
- priorityClassName: ""
- ## (dict) If specified, apply these annotations to each client Pod
- # podAnnotations:
- # example: client-foo
- podDisruptionBudget:
- enabled: false
- minAvailable: 1
- # maxUnavailable: 1
- ingress:
- enabled: false
- # user: NAME
- # password: PASSWORD
- annotations: {}
- # kubernetes.io/ingress.class: nginx
- # kubernetes.io/tls-acme: "true"
- path: /
- hosts:
- - chart-example.local
- tls: []
- # - secretName: chart-example-tls
- # hosts:
- # - chart-example.local
-
- master:
- name: master
- exposeHttp: false
- replicas: 3
- heapSize: "512m"
- # additionalJavaOpts: "-XX:MaxRAM=512m"
- persistence:
- enabled: false
- accessMode: ReadWriteOnce
- name: data
- size: "4Gi"
- # storageClass: "ssd"
- readinessProbe:
- httpGet:
- path: /_cluster/health?local=true
- port: 9200
- initialDelaySeconds: 5
- antiAffinity: "soft"
- nodeAffinity: {}
- nodeSelector: {}
- tolerations: []
- initResources: {}
- # limits:
- # cpu: "25m"
- # # memory: "128Mi"
- # requests:
- # cpu: "25m"
- # memory: "128Mi"
- resources:
- limits:
- cpu: "1"
- # memory: "1024Mi"
- requests:
- cpu: "25m"
- memory: "512Mi"
- priorityClassName: ""
- ## (dict) If specified, apply these annotations to each master Pod
- # podAnnotations:
- # example: master-foo
- podManagementPolicy: OrderedReady
- podDisruptionBudget:
- enabled: false
- minAvailable: 2 # Same as `cluster.env.MINIMUM_MASTER_NODES`
- # maxUnavailable: 1
- updateStrategy:
- type: OnDelete
-
- data:
- name: data
- exposeHttp: false
- replicas: 2
- heapSize: "1536m"
- # additionalJavaOpts: "-XX:MaxRAM=1536m"
- persistence:
- enabled: false
- accessMode: ReadWriteOnce
- name: data
- size: "30Gi"
- # storageClass: "ssd"
- readinessProbe:
- httpGet:
- path: /_cluster/health?local=true
- port: 9200
- initialDelaySeconds: 5
- terminationGracePeriodSeconds: 3600
- antiAffinity: "soft"
- nodeAffinity: {}
- nodeSelector: {}
- tolerations: []
- initResources: {}
- # limits:
- # cpu: "25m"
- # # memory: "128Mi"
- # requests:
- # cpu: "25m"
- # memory: "128Mi"
+ # The service that non-master groups will try to connect to when joining the cluster
+ # This should be set to clusterName + "-" + nodeGroup for your master group
+ masterService: ""
+
+ # Elasticsearch roles that will be applied to this nodeGroup
+ # These will be set as environment variables. E.g. node.master=true
+ roles:
+ master: "true"
+ ingest: "true"
+ data: "true"
+
+ replicas: 3
+ minimumMasterNodes: 2
+
+ esMajorVersion: ""
+
+ # Allows you to add any config files in /usr/share/elasticsearch/config/
+ # such as elasticsearch.yml and log4j2.properties
+ esConfig: {}
+ # elasticsearch.yml: |
+ # key:
+ # nestedkey: value
+ # log4j2.properties: |
+ # key = value
+
+ # Extra environment variables to append to this nodeGroup
+ # This will be appended to the current 'env:' key. You can use any of the kubernetes env
+ # syntax here
+ extraEnvs: []
+ # - name: MY_ENVIRONMENT_VAR
+ # value: the_value_goes_here
+
+ # A list of secrets and their paths to mount inside the pod
+ # This is useful for mounting certificates for security and for mounting
+ # the X-Pack license
+ secretMounts: []
+ # - name: elastic-certificates
+ # secretName: elastic-certificates
+ # path: /usr/share/elasticsearch/config/certs
+
+ image: "docker.elastic.co/elasticsearch/elasticsearch"
+ imageTag: "7.5.1"
+ imagePullPolicy: "IfNotPresent"
+
+ podAnnotations: {}
+ # iam.amazonaws.com/role: es-cluster
+
+ # additional labels
+ labels: {}
+
+ esJavaOpts: "-Xmx1g -Xms1g"
+
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "2Gi"
+ limits:
+ cpu: "1000m"
+ memory: "2Gi"
+
+ initResources: {}
+ # limits:
+ # cpu: "25m"
+ # # memory: "128Mi"
+ # requests:
+ # cpu: "25m"
+ # memory: "128Mi"
+
+ sidecarResources: {}
+ # limits:
+ # cpu: "25m"
+ # # memory: "128Mi"
+ # requests:
+ # cpu: "25m"
+ # memory: "128Mi"
+
+ networkHost: "0.0.0.0"
+
+ volumeClaimTemplate:
+ accessModes: [ "ReadWriteOnce" ]
resources:
- limits:
- cpu: "1"
- # memory: "2048Mi"
requests:
- cpu: "25m"
- memory: "1536Mi"
- priorityClassName: ""
- ## (dict) If specified, apply these annotations to each data Pod
- # podAnnotations:
- # example: data-foo
- podDisruptionBudget:
- enabled: false
- # minAvailable: 1
- maxUnavailable: 1
- podManagementPolicy: OrderedReady
- updateStrategy:
- type: OnDelete
- hooks: # post-start and pre-stop hooks
- drain: # drain the node before stopping it and re-integrate it into the cluster after start
- enabled: true
+ storage: 30Gi
+
+ rbac:
+ create: false
+ serviceAccountName: ""
+
+ podSecurityPolicy:
+ create: false
+ name: ""
+ spec:
+ privileged: true
+ fsGroup:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ volumes:
+ - secret
+ - configMap
+ - persistentVolumeClaim
+
+ persistence:
+ enabled: false
+ annotations: {}
+
+ extraVolumes: ""
+ # - name: extras
+ # emptyDir: {}
+
+ extraVolumeMounts: ""
+ # - name: extras
+ # mountPath: /usr/share/extras
+ # readOnly: true
+
+ extraInitContainers: ""
+ # - name: do-something
+ # image: busybox
+ # command: ['do', 'something']
+
+ # This is the PriorityClass settings as defined in
+ # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
+ priorityClassName: ""
+
+ # By default this will make sure two pods don't end up on the same node
+ # Changing this to a region would allow you to spread pods across regions
+ antiAffinityTopologyKey: "kubernetes.io/hostname"
+
+ # Hard means that by default pods will only be scheduled if there are enough nodes for them
+ # and that they will never end up on the same node. Setting this to soft will do this "best effort"
+ antiAffinity: "hard"
+
+ # This is the node affinity settings as defined in
+ # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
+ nodeAffinity: {}
+
+ # The default is to deploy all pods serially. By setting this to parallel all pods are started at
+ # the same time when bootstrapping the cluster
+ podManagementPolicy: "Parallel"
+
+ protocol: http
+ httpPort: 9200
+ transportPort: 9300
+
+ service:
+ labels: {}
+ labelsHeadless: {}
+ type: ClusterIP
+ nodePort: ""
+ annotations: {}
+ httpPortName: http
+ transportPortName: transport
+
+ updateStrategy: RollingUpdate
+
+ # This is the max unavailable setting for the pod disruption budget
+ # The default value of 1 will make sure that kubernetes won't allow more than 1
+ # of your pods to be unavailable during maintenance
+ maxUnavailable: 1
+
+ podSecurityContext:
+ fsGroup: 1000
+ runAsUser: 1000
+
+ # The following value is deprecated,
+ # please use the above podSecurityContext.fsGroup instead
+ fsGroup: ""
+
+ securityContext:
+ capabilities:
+ drop:
+ - ALL
+ # readOnlyRootFilesystem: true
+ runAsNonRoot: true
+ runAsUser: 1000
+
+ # How long to wait for elasticsearch to stop gracefully
+ terminationGracePeriod: 120
+
+ sysctlVmMaxMapCount: 262144
+
+ readinessProbe:
+ failureThreshold: 3
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ successThreshold: 3
+ timeoutSeconds: 5
+
+ # https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
+ clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
+
+ ## Use an alternate scheduler.
+ ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
+ ##
+ schedulerName: ""
+
+ imagePullSecrets: []
+ nodeSelector: {}
+ tolerations: []
+
+ # Enabling this will publicly expose your Elasticsearch instance.
+ # Only enable this if you have security enabled on your cluster
+ ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+ nameOverride: ""
+ fullnameOverride: ""
+
+ # https://github.com/elastic/helm-charts/issues/63
+ masterTerminationFix: false
+
+ lifecycle: {}
+ # preStop:
+ # exec:
+ # command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
+ # postStart:
+ # exec:
+ # command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
+
+ sysctlInitContainer:
+ enabled: true
+
+ keystore: []
nameOverride: ""
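
Note that `elasticsearch.persistence.enabled` defaults to `false` in both values files, so indexed trace data does not survive pod restarts. A hedged example of enabling it with a custom storage class (the file name and the `ssd` class are placeholders; the class must already exist in the target cluster, and the `volumeClaimTemplate` is passed through to the StatefulSet's PVC template by the upstream elastic chart):

```yaml
# persistent-values.yaml -- hypothetical override
elasticsearch:
  persistence:
    enabled: true
  volumeClaimTemplate:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: ssd   # placeholder; must match a StorageClass in the cluster
    resources:
      requests:
        storage: 30Gi
```
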