Posted to commits@inlong.apache.org by do...@apache.org on 2022/04/12 12:51:22 UTC

[incubator-inlong] branch master updated: [INLONG-3489][K8s] Improve documentation in helm chart (#3646)

This is an automated email from the ASF dual-hosted git repository.

dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-inlong.git


The following commit(s) were added to refs/heads/master by this push:
     new f65bcf307 [INLONG-3489][K8s] Improve documentation in helm chart (#3646)
f65bcf307 is described below

commit f65bcf3075f8e899ab068e6014bf448ec195926f
Author: Yuanhao Ji <ji...@apache.org>
AuthorDate: Tue Apr 12 20:51:18 2022 +0800

    [INLONG-3489][K8s] Improve documentation in helm chart (#3646)
---
 docker/kubernetes/README.md           | 132 ++++++++++++++++++++++--
 docker/kubernetes/templates/NOTES.txt | 185 ++++++++++++++++++++++++++++++----
 docker/kubernetes/values.yaml         |  59 ++++++++---
 3 files changed, 331 insertions(+), 45 deletions(-)

diff --git a/docker/kubernetes/README.md b/docker/kubernetes/README.md
index d305e4734..136bb0bbe 100644
--- a/docker/kubernetes/README.md
+++ b/docker/kubernetes/README.md
@@ -1,21 +1,65 @@
-## The Helm Chart for Apache InLong
+# The Helm Chart for Apache InLong
 
-### Prerequisites
+## Prerequisites
 
 - Kubernetes 1.10+
 - Helm 3.0+
 - A dynamic provisioner for the PersistentVolumes(`production environment`)
 
-### Usage
+## Usage
 
-#### Install
+### Install
+
+If the namespace named `inlong` does not exist, create it first by running:
 
 ```shell
 kubectl create namespace inlong
+```
+
+To install the chart with a namespace named `inlong`, try:
+
+```shell
 helm upgrade inlong --install -n inlong ./
 ```
 
-#### Configuration
+### Access InLong Dashboard
+
+If `ingress.enabled` in [values.yaml](values.yaml) is set to `true`, you can simply access `http://${ingress.host}/dashboard` in your browser.
+
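+For example, an override file enabling ingress might look like the sketch below (the host is a placeholder; `ingress.enabled` and `ingress.host` are the keys referenced above):
+
+```yaml
+# my-values.yaml -- illustrative override; only keys mentioned in this README are used
+ingress:
+  enabled: true
+  host: inlong.example.com
+```
+
+You could then pass it at install time, e.g. `helm upgrade inlong --install -n inlong -f my-values.yaml ./`.
+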
+Otherwise, when `dashboard.service.type` is set to `ClusterIP`, run the following port-forward commands:
+
+```shell
+export DASHBOARD_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=inlong-dashboard,app.kubernetes.io/instance=inlong" -o jsonpath="{.items[0].metadata.name}" -n inlong)
+export DASHBOARD_CONTAINER_PORT=$(kubectl get pod $DASHBOARD_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n inlong)
+kubectl port-forward $DASHBOARD_POD_NAME 8181:$DASHBOARD_CONTAINER_PORT -n inlong
+```
+
+And then access [http://127.0.0.1:8181](http://127.0.0.1:8181)
+
+> Tip: If the error `unable to do port forwarding: socat not found` appears, install `socat` first.
+
+Alternatively, when `dashboard.service.type` is set to `NodePort`, run the following commands:
+
+```shell
+export DASHBOARD_NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n inlong)
+export DASHBOARD_NODE_PORT=$(kubectl get svc inlong-dashboard -o jsonpath="{.spec.ports[0].nodePort}" -n inlong)
+```
+
+And then access `http://$DASHBOARD_NODE_IP:$DASHBOARD_NODE_PORT`
+
+When `dashboard.service.type` is set to `LoadBalancer`, run the following command:
+
+```shell
+export DASHBOARD_SERVICE_IP=$(kubectl get svc inlong-dashboard --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}"  -n inlong)
+```
+
+And then access `http://$DASHBOARD_SERVICE_IP:30080`
+
+> NOTE: It may take a few minutes for the `LoadBalancer` IP to be available. You can check the status by running `kubectl get svc inlong-dashboard -n inlong -w`.
+
+Log in to the InLong Dashboard with the default username `admin` and the default password `inlong`.
+
+### Configuration
 
 The configuration file is [values.yaml](values.yaml), and the following table lists the configurable parameters of InLong and their default values.
 
@@ -26,7 +70,7 @@ The configuration file is [values.yaml](values.yaml), and the following tables l
 |                         `images.<component>.repository`                          |                  |                                                          Docker image repository for the component                                                           |
 |                             `images.<component>.tag`                             |     `latest`     |                                                              Docker image tag for the component                                                              |
 |                             `<component>.component`                              |                  |                                                                        Component name                                                                        |
-|                            `<component>.replicaCount`                            |       `1`        |                                                Replicas is the desired number of replicas of a given Template                                                |
+|                              `<component>.replicas`                              |       `1`        |                                                Replicas is the desired number of replicas of a given Template                                                |
 |                        `<component>.podManagementPolicy`                         |  `OrderedReady`  |                PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down                 |
 |                            `<component>.annotations`                             |       `{}`       |                                 The `annotations` field can be used to attach arbitrary non-identifying metadata to objects                                  |
 |                            `<component>.tolerations`                             |       `[]`       |                     Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints                     |
@@ -58,16 +102,86 @@ The configuration file is [values.yaml](values.yaml), and the following tables l
 |                           `external.pulsar.serviceUrl`                           | `localhost:6650` |                                                                 External Pulsar service URL                                                                  |
 |                            `external.pulsar.adminUrl`                            | `localhost:8080` |                                                                  External Pulsar admin URL                                                                   |
 
-> The components include `agent`, `audit`, `dashboard`, `dataproxy`, `manager`, `tubemq-manager`, `tubemq-master`, `tubemq-broker`, `zookeeper` and `mysql`.
+> The optional components include `agent`, `audit`, `dashboard`, `dataproxy`, `manager`, `tubemq-manager`, `tubemq-master`, `tubemq-broker`, `zookeeper` and `mysql`.
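+
+As a sketch, a custom values file overriding a few of the parameters in this table for the `dashboard` component could look like the following (values are illustrative; key paths follow the table above):
+
+```yaml
+images:
+  dashboard:
+    repository: inlong/dashboard   # Docker image repository for the component
+    tag: latest                    # Docker image tag for the component
+dashboard:
+  replicas: 2                      # desired number of replicas
+  service:
+    type: ClusterIP                # how the service is exposed
+```
+
+Apply it with the `-f` flag, e.g. `helm upgrade inlong --install -n inlong -f my-values.yaml ./`.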
+
+### Uninstall
 
-#### Uninstall
+To uninstall the release, try:
 
 ```shell
 helm uninstall inlong -n inlong
 ```
 
-You can delete all `PVC ` if any persistent volume claims used, it will lose all data.
+The above command removes all the Kubernetes components except the `PVC` associated with the chart, and deletes the release.
+If any persistent volume claims were used, you can delete all the PVCs, but doing so will lose all data:
 
 ```shell
 kubectl delete pvc -n inlong --all
 ```
+
+> Note: Deleting the PVCs also deletes all data. Please be cautious before doing so.
+
+## Development
+
+A Kubernetes cluster with [helm](https://helm.sh) is required for development.
+If you don't have one, [kind](https://github.com/kubernetes-sigs/kind) is recommended:
+it runs a local Kubernetes cluster in Docker containers, so starting and stopping Kubernetes nodes takes very little time.
+
+### Quick start with kind
+
+You can install kind by following the [Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start) section of their official documentation.
+
+After installing kind, you can create a Kubernetes cluster with the [configuration file](../../.github/kind.yml), try:
+
+```shell
+kind create cluster --config ../../.github/kind.yml
+```
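+
+The referenced kind configuration file lives in the repository; a minimal configuration of the same kind (the cluster name matches the `kind-inlong-cluster` context used below, and the node roles are illustrative) would look like:
+
+```yaml
+# minimal kind cluster config -- a sketch, not the repository's actual kind.yml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+name: inlong-cluster
+nodes:
+  - role: control-plane
+  - role: worker
+```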
+
+To specify another image, use the `--image` flag: `kind create cluster --image=....`.
+Using a different image allows you to change the Kubernetes version of the created cluster.
+To find images suitable for a given release, check the [release notes](https://github.com/kubernetes-sigs/kind/releases) for your kind version (check with `kind version`), where you'll find a complete listing of images created for that release.
+
+After the cluster is created, you can interact with it:
+
+```shell
+kubectl cluster-info --context kind-inlong-cluster
+```
+
+Now, you have a running Kubernetes cluster for local development.
+
+### Install Helm
+
+Please follow the [installation guide](https://helm.sh/docs/intro/install) in the official documentation to install Helm.
+
+### Install the chart
+
+To create the namespace and install the chart, try:
+
+```shell
+kubectl create namespace inlong
+helm upgrade inlong --install -n inlong ./
+```
+
+Deployment may take a few minutes. Confirm that the pods are up:
+
+```shell
+watch kubectl get pods -n inlong -o wide
+```
+
+### Develop and debug
+
+Follow the [template debugging guide](https://helm.sh/docs/chart_template_guide/debugging) in the official documentation to debug your chart.
+
+Besides, you can save the rendered templates by:
+
+```shell
+helm template ./ --output-dir ./result
+```
+
+Then, you can check the rendered templates in the `result` directory.
+
+## Troubleshooting
+
+We've done our best to make these charts as seamless as possible, but occasionally there are circumstances beyond our control.
+We've collected tips and tricks for troubleshooting common issues.
+Please examine these first before raising an [issue](https://github.com/apache/incubator-inlong/issues/new/choose), and feel free to make a [Pull Request](https://github.com/apache/incubator-inlong/compare)!
diff --git a/docker/kubernetes/templates/NOTES.txt b/docker/kubernetes/templates/NOTES.txt
index 37b4f95fe..bbbe0b08c 100644
--- a/docker/kubernetes/templates/NOTES.txt
+++ b/docker/kubernetes/templates/NOTES.txt
@@ -1,3 +1,4 @@
+{{/*
 #
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -14,26 +15,166 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+*/}}
 
-1. Get the application URL by running these commands:
-{{/*{{- if .Values.ingress.enabled }}*/}}
-{{/*{{- range $host := .Values.ingress.hosts }}*/}}
-{{/*  {{- range .paths }}*/}}
-{{/*  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}*/}}
-{{/*  {{- end }}*/}}
-{{/*{{- end }}*/}}
-{{/*{{- else if contains "NodePort" .Values.service.type }}*/}}
-{{/*  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "inlong.fullname" . }})*/}}
-{{/*  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")*/}}
-{{/*  echo http://$NODE_IP:$NODE_PORT*/}}
-{{/*{{- else if contains "LoadBalancer" .Values.service.type }}*/}}
-{{/*     NOTE: It may take a few minutes for the LoadBalancer IP to be available.*/}}
-{{/*           You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "inlong.fullname" . }}'*/}}
-{{/*  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "inlong.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")*/}}
-{{/*  echo http://$SERVICE_IP:{{ .Values.service.port }}*/}}
-{{/*{{- else if contains "ClusterIP" .Values.service.type }}*/}}
-{{/*  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "inlong.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")*/}}
-{{/*  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")*/}}
-{{/*  echo "Visit http://127.0.0.1:8080 to use your application"*/}}
-{{/*  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT*/}}
-{{/*{{- end }}*/}}
+** Thank you for installing {{ .Chart.Name }}. Please be patient while the chart {{ .Chart.Name }}-{{ .Chart.AppVersion }} is being deployed. **
+
+1. Access InLong Dashboard by running these commands:
+
+{{- if .Values.ingress.enabled }}
+
+    InLong Dashboard URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.host }}/{{ .Values.ingress.path }}
+
+{{- else if eq .Values.dashboard.service.type "ClusterIP" }}
+
+    $ export DASHBOARD_POD_NAME=$(sudo kubectl get pods -l "app.kubernetes.io/name={{ template "inlong.name" . }}-{{ .Values.dashboard.component }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}" -n {{ .Release.Namespace }})
+    $ export DASHBOARD_CONTAINER_PORT=$(sudo kubectl get pod $DASHBOARD_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n {{ .Release.Namespace }})
+    $ sudo kubectl port-forward $DASHBOARD_POD_NAME 8181:$DASHBOARD_CONTAINER_PORT -n {{ .Release.Namespace }}
+    $ echo "InLong Dashboard URL: http://127.0.0.1:8181"
+
+{{- else if eq .Values.dashboard.service.type "NodePort" }}
+
+    $ export DASHBOARD_NODE_IP=$(sudo kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n {{ .Release.Namespace }})
+    $ export DASHBOARD_NODE_PORT=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }} -o jsonpath="{.spec.ports[0].nodePort}" -n {{ .Release.Namespace }})
+    $ echo "InLong Dashboard URL: http://$DASHBOARD_NODE_IP:$DASHBOARD_NODE_PORT"
+
+{{- else if eq .Values.dashboard.service.type "LoadBalancer" }}
+
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+          You can check the status by running 'sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }} -n {{ .Release.Namespace }} -w'
+
+    $ export DASHBOARD_SERVICE_IP=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.dashboard.component }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}"  -n {{ .Release.Namespace }})
+    $ echo "http://$DASHBOARD_SERVICE_IP:{{ .Values.dashboard.service.nodePort }}"
+
+{{- end }}
+
+2. Access InLong Manager by running these commands:
+
+{{- if .Values.ingress.enabled }}
+
+    InLong Manager URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.host }}/{{ .Values.ingress.path }}
+
+{{- else if eq .Values.manager.service.type "ClusterIP" }}
+
+    $ export MANAGER_POD_NAME=$(sudo kubectl get pods -l "app.kubernetes.io/name={{ template "inlong.name" . }}-{{ .Values.manager.component }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}" -n {{ .Release.Namespace }})
+    $ export MANAGER_CONTAINER_PORT=$(sudo kubectl get pod $MANAGER_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n {{ .Release.Namespace }})
+    $ sudo kubectl port-forward $MANAGER_POD_NAME 8182:$MANAGER_CONTAINER_PORT -n {{ .Release.Namespace }}
+    $ echo "InLong Manager URL: http://127.0.0.1:8182"
+
+{{- else if eq .Values.manager.service.type "NodePort" }}
+
+    $ export MANAGER_NODE_IP=$(sudo kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n {{ .Release.Namespace }})
+    $ export MANAGER_NODE_PORT=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.manager.component }} -o jsonpath="{.spec.ports[0].nodePort}" -n {{ .Release.Namespace }})
+    $ echo "InLong Manager URL: http://$MANAGER_NODE_IP:$MANAGER_NODE_PORT"
+
+{{- else if eq .Values.manager.service.type "LoadBalancer" }}
+
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+          You can check the status by running 'sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.manager.component }} -n {{ .Release.Namespace }} -w'
+
+    $ export MANAGER_SERVICE_IP=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.manager.component }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}" -n {{ .Release.Namespace }})
+    $ echo "InLong Manager URL: http://$MANAGER_SERVICE_IP:{{ .Values.manager.service.nodePort }}"
+
+{{- end }}
+
+3. Access InLong DataProxy by running these commands:
+
+{{- if .Values.ingress.enabled }}
+
+    InLong DataProxy URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.host }}/{{ .Values.ingress.path }}
+
+{{- else if eq .Values.dataproxy.service.type "ClusterIP" }}
+
+    $ export DATA_PROXY_POD_NAME=$(sudo kubectl get pods -l "app.kubernetes.io/name={{ template "inlong.name" . }}-{{ .Values.dataproxy.component }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}" -n {{ .Release.Namespace }})
+    $ export DATA_PROXY_CONTAINER_PORT=$(sudo kubectl get pod $DATA_PROXY_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n {{ .Release.Namespace }})
+    $ sudo kubectl port-forward $DATA_PROXY_POD_NAME 8183:$DATA_PROXY_CONTAINER_PORT -n {{ .Release.Namespace }}
+    $ echo "InLong DataProxy URL: http://127.0.0.1:8183"
+
+{{- else if eq .Values.dataproxy.service.type "NodePort" }}
+
+    $ export DATA_PROXY_NODE_IP=$(sudo kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n {{ .Release.Namespace }})
+    $ export DATA_PROXY_NODE_PORT=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }} -o jsonpath="{.spec.ports[0].nodePort}" -n {{ .Release.Namespace }})
+    $ echo "InLong DataProxy URL: http://$DATA_PROXY_NODE_IP:$DATA_PROXY_NODE_PORT"
+
+{{- else if eq .Values.dataproxy.service.type "LoadBalancer" }}
+
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+          You can check the status by running 'sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }} -n {{ .Release.Namespace }} -w'
+
+    $ export DATA_PROXY_SERVICE_IP=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.dataproxy.component }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}" -n {{ .Release.Namespace }})
+    $ echo "InLong DataProxy URL: http://$DATA_PROXY_SERVICE_IP:{{ .Values.dataproxy.service.nodePort }}"
+
+{{- end }}
+
+4. Access InLong TubeMQ Master by running these commands:
+
+{{- if .Values.ingress.enabled }}
+
+    InLong TubeMQ Master URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.host }}/{{ .Values.ingress.path }}
+
+{{- else if eq .Values.tubemqMaster.service.type "ClusterIP" }}
+
+    $ export TUBEMQ_MASTER_POD_NAME=$(sudo kubectl get pods -l "app.kubernetes.io/name={{ template "inlong.name" . }}-{{ .Values.tubemqMaster.component }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}" -n {{ .Release.Namespace }})
+    $ export TUBEMQ_MASTER_CONTAINER_PORT=$(sudo kubectl get pod $TUBEMQ_MASTER_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n {{ .Release.Namespace }})
+    $ sudo kubectl port-forward $TUBEMQ_MASTER_POD_NAME 8183:$TUBEMQ_MASTER_CONTAINER_PORT -n {{ .Release.Namespace }}
+    $ echo "InLong TubeMQ Master URL: http://127.0.0.1:8183"
+
+{{- else if eq .Values.tubemqMaster.service.type "NodePort" }}
+
+    $ export TUBEMQ_MASTER_NODE_IP=$(sudo kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n {{ .Release.Namespace }})
+    $ export TUBEMQ_MASTER_NODE_PORT=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }} -o jsonpath="{.spec.ports[0].nodePort}" -n {{ .Release.Namespace }})
+    $ echo "InLong TubeMQ Master URL: http://$TUBEMQ_MASTER_NODE_IP:$TUBEMQ_MASTER_NODE_PORT"
+
+{{- else if eq .Values.tubemqMaster.service.type "LoadBalancer" }}
+
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+          You can check the status by running 'sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }} -n {{ .Release.Namespace }} -w'
+
+    $ export TUBEMQ_MASTER_SERVICE_IP=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.tubemqMaster.component }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}" -n {{ .Release.Namespace }})
+    $ echo "InLong TubeMQ Master URL: http://$TUBEMQ_MASTER_SERVICE_IP:{{ .Values.tubemqMaster.service.webNodePort }}"
+
+{{- end }}
+
+5. Access InLong TubeMQ Broker by running these commands:
+
+{{- if .Values.ingress.enabled }}
+
+    InLong TubeMQ Broker URL: http{{ if .Values.ingress.tls.enabled }}s{{ end }}://{{ .Values.ingress.host }}/{{ .Values.ingress.path }}
+
+{{- else if eq .Values.tubemqBroker.service.type "ClusterIP" }}
+
+    $ export TUBEMQ_BROKER_POD_NAME=$(sudo kubectl get pods -l "app.kubernetes.io/name={{ template "inlong.name" . }}-{{ .Values.tubemqBroker.component }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}" -n {{ .Release.Namespace }})
+    $ export TUBEMQ_BROKER_CONTAINER_PORT=$(sudo kubectl get pod $TUBEMQ_BROKER_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n {{ .Release.Namespace }})
+    $ sudo kubectl port-forward $TUBEMQ_BROKER_POD_NAME 8183:$TUBEMQ_BROKER_CONTAINER_PORT -n {{ .Release.Namespace }}
+    $ echo "InLong TubeMQ Broker URL: http://127.0.0.1:8183"
+
+{{- else if eq .Values.tubemqBroker.service.type "NodePort" }}
+
+    $ export TUBEMQ_BROKER_NODE_IP=$(sudo kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n {{ .Release.Namespace }})
+    $ export TUBEMQ_BROKER_NODE_PORT=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.tubemqBroker.component }} -o jsonpath="{.spec.ports[0].nodePort}" -n {{ .Release.Namespace }})
+    $ echo "InLong TubeMQ Broker URL: http://$TUBEMQ_BROKER_NODE_IP:$TUBEMQ_BROKER_NODE_PORT"
+
+{{- else if eq .Values.tubemqBroker.service.type "LoadBalancer" }}
+
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+          You can check the status by running 'sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.tubemqBroker.component }} -n {{ .Release.Namespace }} -w'
+
+    $ export TUBEMQ_BROKER_SERVICE_IP=$(sudo kubectl get svc {{ template "inlong.fullname" . }}-{{ .Values.tubemqBroker.component }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}" -n {{ .Release.Namespace }})
+    $ echo "InLong TubeMQ Broker URL: http://$TUBEMQ_BROKER_SERVICE_IP:{{ .Values.tubemqBroker.service.webNodePort }}"
+
+{{- end }}
+
+To learn more about the release, try:
+
+    $ sudo helm status {{ .Release.Name }} -n {{ .Release.Namespace }}
+    $ sudo helm get all {{ .Release.Name }} -n {{ .Release.Namespace }}
+
+To uninstall the release, try:
+
+    $ sudo helm uninstall {{ .Release.Name }} -n {{ .Release.Namespace }}
+
+To delete all PVCs if any persistent volume claims were used, try:
+
+    $ sudo kubectl delete pvc -n {{ .Release.Namespace }} --all
+
+For more details, please check out https://inlong.apache.org/docs/next/deployment/k8s
diff --git a/docker/kubernetes/values.yaml b/docker/kubernetes/values.yaml
index ee9ee01fc..968aa1220 100644
--- a/docker/kubernetes/values.yaml
+++ b/docker/kubernetes/values.yaml
@@ -87,10 +87,13 @@ agent:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The agent service port
   port: 8008
   env:
     AGENT_JVM_HEAP_OPTS: >-
@@ -130,10 +133,13 @@ dashboard:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The dashboard service port
   port: 80
   service:
     # type determines how the service is exposed. Defaults to NodePort. Valid options are ClusterIP, NodePort, LoadBalancer, and ExternalName
@@ -179,10 +185,13 @@ dataproxy:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The dataproxy service port
   port: 46801
   service:
     # type determines how the service is exposed. Defaults to NodePort. Valid options are ClusterIP, NodePort, LoadBalancer, and ExternalName
@@ -234,10 +243,13 @@ tubemqManager:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The tubemq manager service port
   port: 8089
   env:
     TUBE_MANAGER_JVM_HEAP_OPTS: >-
@@ -276,10 +288,13 @@ manager:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The manager service port
   port: 8083
   service:
     # type determines how the service is exposed. Defaults to NodePort. Valid options are ClusterIP, NodePort, LoadBalancer, and ExternalName
@@ -333,10 +348,13 @@ audit:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The audit service port
   port: 10081
   env:
     AUDIT_JVM_HEAP_OPTS: >-
@@ -375,13 +393,16 @@ mysql:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
   resources:
     requests:
       cpu: 1
       memory: "1Gi"
+  # The mysql service port
+  port: 3306
   username: "root"
   password: "inlong"
-  port: 3306
   volumes:
     name: data
     size: "10Gi"
@@ -418,6 +439,13 @@ zookeeper:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+  resources:
+    requests:
+      cpu: 1
+      memory: "1Gi"
+  # The zookeeper service ports
   ports:
     client: 2181
     follower: 2888
@@ -433,10 +461,6 @@ zookeeper:
       failureThreshold: 10
       initialDelaySeconds: 10
       periodSeconds: 30
-  resources:
-    requests:
-      cpu: 1
-      memory: "1Gi"
   volumes:
     name: data
     size: "10Gi"
@@ -477,6 +501,13 @@ tubemqMaster:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+  resources:
+    requests:
+      cpu: 1
+      memory: "1Gi"
+  # The tubemq master service ports
   ports:
     rpcPort: 8715
     webPort: 8080
@@ -492,10 +523,6 @@ tubemqMaster:
       failureThreshold: 10
       initialDelaySeconds: 10
       periodSeconds: 30
-  resources:
-    requests:
-      cpu: 1
-      memory: "1Gi"
   volumes:
     name: data
     size: "10Gi"
@@ -555,6 +582,13 @@ tubemqBroker:
   # Optional duration in seconds the pod needs to terminate gracefully.
   # For more details, please check out https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
   terminationGracePeriodSeconds: 30
+  # Optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
+  # For more details, please check out https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+  resources:
+    requests:
+      cpu: 1
+      memory: "1Gi"
+  # The tubemq broker service ports
   ports:
     rpcPort: 8123
     webPort: 8081
@@ -569,10 +603,6 @@ tubemqBroker:
       failureThreshold: 10
       initialDelaySeconds: 10
       periodSeconds: 30
-  resources:
-    requests:
-      cpu: 1
-      memory: "1Gi"
   volumes:
     name: data
     size: "10Gi"
@@ -602,7 +632,7 @@ tubemqBroker:
       -XX:MaxRAMPercentage=80.0
       -XX:-UseAdaptiveSizePolicy
 
-# InLong will use the external Services.
+# If an external MySQL or Pulsar service exists, set the 'enabled' field to true and configure the related information.
 external:
   mysql:
     enabled: false
@@ -610,6 +640,7 @@ external:
     port: 3306
     username: "root"
     password: "password"
+  # If there is no external Pulsar, InLong will use TubeMQ.
   pulsar:
     enabled: false
     serviceUrl: "localhost:6650"
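
For instance, pointing InLong at existing MySQL and Pulsar deployments could look like the following sketch (only keys shown in this change are used; hosts and credentials are placeholders):

```yaml
# illustrative override of the 'external' section in values.yaml
external:
  mysql:
    enabled: true
    port: 3306
    username: "root"
    password: "my-password"
  pulsar:
    enabled: true
    serviceUrl: "pulsar-broker:6650"   # external Pulsar service URL
    adminUrl: "pulsar-broker:8080"     # external Pulsar admin URL
```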