Posted to notifications@skywalking.apache.org by wu...@apache.org on 2022/04/06 00:55:23 UTC

[skywalking] branch master updated: Update `k8s-monitoring`, `backend-telemetry` and `v9-version-upgrade` doc for v9. (#8813)

This is an automated email from the ASF dual-hosted git repository.

wusheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/skywalking.git


The following commit(s) were added to refs/heads/master by this push:
     new c66111c0d2 Update `k8s-monitoring`, `backend-telemetry` and `v9-version-upgrade` doc for v9. (#8813)
c66111c0d2 is described below

commit c66111c0d26093f32be3977275448182b469f98b
Author: Kai <wa...@foxmail.com>
AuthorDate: Wed Apr 6 08:55:15 2022 +0800

    Update `k8s-monitoring`, `backend-telemetry` and `v9-version-upgrade` doc for v9. (#8813)
---
 CHANGES.md                                       |   1 +
 docs/en/FAQ/v9-version-upgrade.md                |   3 +-
 docs/en/setup/backend/backend-k8s-monitoring.md  |  83 +++++------
 docs/en/setup/backend/backend-telemetry.md       |   2 +-
 docs/en/setup/backend/otel-collector-config.yaml | 169 ---------------------
 docs/en/setup/backend/otel-collector-oap.yaml    | 180 -----------------------
 6 files changed, 43 insertions(+), 395 deletions(-)

diff --git a/CHANGES.md b/CHANGES.md
index d4686220dd..602fef3a34 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -184,6 +184,7 @@ NOTICE, this sharding concept is NOT just for splitting data into different data
 * Add profiling doc, and remove service mesh intro doc(not necessary).
 * Add a doc for virtual database.
 * Rewrite UI introduction.
+* Update `k8s-monitoring`, `backend-telemetry` and `v9-version-upgrade` doc for v9.
 
 All issues and pull requests are [here](https://github.com/apache/skywalking/milestone/112?closed=1)
 
diff --git a/docs/en/FAQ/v9-version-upgrade.md b/docs/en/FAQ/v9-version-upgrade.md
index aec6902ca2..dc5e7bac7c 100644
--- a/docs/en/FAQ/v9-version-upgrade.md
+++ b/docs/en/FAQ/v9-version-upgrade.md
@@ -16,7 +16,8 @@ Notice **Incompatibility (1)**, the UI template configuration protocol is incomp
 2. MAL: [metric level function](../../../docs/en/concepts-and-designs/mal.md) adds a required argument `Layer`. Previous MAL expressions should add this argument.
 3. LAL: [Extractor](../../../docs/en/concepts-and-designs/lal.md) adds the function `layer`. If it is not set manually, the default layer is `GENERAL`, and for logs from `ALS` the
    default layer is `mesh`.
-4. Storage:add `service_id`, `short_name` and `layer` columns to table `ServiceTraffic`, add `layer` column to table `InstanceTraffic`.
+4. Storage: Add `service_id`, `short_name` and `layer` columns to table `ServiceTraffic`, and add a `layer` column to table `InstanceTraffic`.
    These data would be incompatible with previous versions.
    Make sure to remove the older `ServiceTraffic` and `InstanceTraffic` tables before OAP(v9) starts. 
    OAP would generate the new table in the start procedure, and recreate all existing services and instances when traffic comes.
+5. UI templates: Redesigned for v9. Make sure to remove the older `ui_template` table before OAP(v9) starts.
diff --git a/docs/en/setup/backend/backend-k8s-monitoring.md b/docs/en/setup/backend/backend-k8s-monitoring.md
index aacfd70ddf..3785ff585e 100644
--- a/docs/en/setup/backend/backend-k8s-monitoring.md
+++ b/docs/en/setup/backend/backend-k8s-monitoring.md
@@ -1,9 +1,6 @@
 # K8s monitoring 
 SkyWalking leverages K8s kube-state-metrics and cAdvisor for collecting metrics data from K8s, and leverages OpenTelemetry Collector to transfer the metrics to
-[OpenTelemetry receiver](opentelemetry-receiver.md) and into the [Meter System](./../../concepts-and-designs/meter.md). This feature requires authorizing the OAP Server to access K8s's `API Server`.  
-We define the k8s-cluster as a `Service` in the OAP, and use `k8s-cluster::` as a prefix to identify it.  
-We define the k8s-node as an `Instance` in the OAP, and set its name as the K8s `node name`.  
-We define the k8s-service as an `Endpoint` in the OAP, and set its name as `$serviceName.$namespace`.  
+[OpenTelemetry receiver](opentelemetry-receiver.md) and into the [Meter System](./../../concepts-and-designs/meter.md). This feature requires authorizing the OAP Server to access K8s's `API Server`.
 
 ## Data flow
 1. K8s kube-state-metrics and cAdvisor collect metrics data from K8s.
@@ -13,51 +10,51 @@ We define the k8s-service as an `Endpoint` in the OAP, and set its name as `$ser
 ## Setup 
 1. Setup [kube-state-metric](https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment).
 2. cAdvisor is integrated into `kubelet` by default.
-3. Set up [OpenTelemetry Collector ](https://opentelemetry.io/docs/collector/getting-started/#kubernetes). For details on Prometheus Receiver in OpenTelemetry Collector for K8s, refer to [here](https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml). For a quick start, we have provided a full example for OpenTelemetry Collector configuration [otel-collector-config.yaml](otel-collector-config.yaml).
+3. Set up [OpenTelemetry Collector ](https://opentelemetry.io/docs/collector/getting-started/#kubernetes). For details on Prometheus Receiver in OpenTelemetry Collector for K8s, refer to [here](https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml). 
+For a quick start, we provide a full configuration example with recommended component versions; refer to the [showcase](https://github.com/apache/skywalking-showcase/tree/main/deploy/platform/kubernetes/feature-kubernetes-monitor).
 4. Config SkyWalking [OpenTelemetry receiver](opentelemetry-receiver.md).
 
-## Supported Metrics
-From the different points of view to monitor K8s, there are 3 kinds of metrics: [Cluster](#cluster) / [Node](#node) / [Service](#service) 
-
-### Cluster 
-These metrics are related to the selected cluster (`Current Service in the dashboard`).
+## K8s Cluster Monitoring
+K8s cluster monitoring provides observability into the status and resources of the K8s cluster, including the whole
+cluster and each node. The K8s cluster is modeled as a `Service` in the OAP, each K8s node as an `Instance`, and both land on the `Layer: K8S`.
 
+### K8s Cluster Supported Metrics
 | Monitoring Panel | Unit | Metric Name | Description | Data Source |
-|-----|-----|-----|-----|-----|
-| Node Total |  | k8s_cluster_node_total | The number of nodes | K8s kube-state-metrics|
-| Namespace Total |  | k8s_cluster_namespace_total | The number of namespaces | K8s kube-state-metrics|
-| Deployment Total |  | k8s_cluster_deployment_total | The number of deployments | K8s kube-state-metrics|
-| Service Total |  | k8s_cluster_service_total | The number of services | K8s kube-state-metrics|
-| Pod Total |  | k8s_cluster_pod_total | The number of pods | K8s kube-state-metrics|
-| Container Total |  | k8s_cluster_container_total | The number of containers | K8s kube-state-metrics|
-| CPU Resources | m | k8s_cluster_cpu_cores<br />k8s_cluster_cpu_cores_requests<br />k8s_cluster_cpu_cores_limits<br />k8s_cluster_cpu_cores_allocatable | The capacity and the Requests / Limits / Allocatable of the CPU | K8s kube-state-metrics|
-| Memory Resources | GB | k8s_cluster_memory_total<br />k8s_cluster_memory_requests<br />k8s_cluster_memory_limits<br />k8s_cluster_memory_allocatable | The capacity and the Requests / Limits / Allocatable of the memory | K8s kube-state-metrics|
-| Storage Resources | GB | k8s_cluster_storage_total<br />k8s_cluster_storage_allocatable | The capacity and allocatable of the storage | K8s kube-state-metrics|
-| Node Status |  | k8s_cluster_node_status | The current status of the nodes | K8s kube-state-metrics|
-| Deployment Status |  | k8s_cluster_deployment_status | The current status of the deployment | K8s kube-state-metrics|
-| Deployment Spec Replicas |  | k8s_cluster_deployment_spec_replicas | The number of desired pods for a deployment | K8s kube-state-metrics|
-| Service Status |  | k8s_cluster_service_pod_status | The services current status, depending on the related pods' status | K8s kube-state-metrics|
-| Pod Status Not Running |  | k8s_cluster_pod_status_not_running | The pods which are not running in the current phase | K8s kube-state-metrics|
-| Pod Status Waiting |  | k8s_cluster_pod_status_waiting | The pods and containers which are currently in the waiting status, with reasons shown | K8s kube-state-metrics|
-| Pod Status Terminated |  | k8s_cluster_container_status_terminated | The pods and containers which are currently in the terminated status, with reasons shown | K8s kube-state-metrics|
-
-### Node
-These metrics are related to the selected node (`Current Instance in the dashboard`).
+|-----|------|-----|-----|-----|
+| Node Total |      | k8s_cluster_node_total | The number of nodes | K8s kube-state-metrics|
+| Namespace Total |      | k8s_cluster_namespace_total | The number of namespaces | K8s kube-state-metrics|
+| Deployment Total |      | k8s_cluster_deployment_total | The number of deployments | K8s kube-state-metrics|
+| Service Total |      | k8s_cluster_service_total | The number of services | K8s kube-state-metrics|
+| Pod Total |      | k8s_cluster_pod_total | The number of pods | K8s kube-state-metrics|
+| Container Total |      | k8s_cluster_container_total | The number of containers | K8s kube-state-metrics|
+| CPU Resources | m    | k8s_cluster_cpu_cores<br />k8s_cluster_cpu_cores_requests<br />k8s_cluster_cpu_cores_limits<br />k8s_cluster_cpu_cores_allocatable | The capacity and the Requests / Limits / Allocatable of the CPU | K8s kube-state-metrics|
+| Memory Resources | Gi   | k8s_cluster_memory_total<br />k8s_cluster_memory_requests<br />k8s_cluster_memory_limits<br />k8s_cluster_memory_allocatable | The capacity and the Requests / Limits / Allocatable of the memory | K8s kube-state-metrics|
+| Storage Resources | Gi   | k8s_cluster_storage_total<br />k8s_cluster_storage_allocatable | The capacity and allocatable of the storage | K8s kube-state-metrics|
+| Node Status |      | k8s_cluster_node_status | The current status of the nodes | K8s kube-state-metrics|
+| Deployment Status |      | k8s_cluster_deployment_status | The current status of the deployment | K8s kube-state-metrics|
+| Deployment Spec Replicas |      | k8s_cluster_deployment_spec_replicas | The number of desired pods for a deployment | K8s kube-state-metrics|
+| Service Status |      | k8s_cluster_service_pod_status | The current status of the service, depending on the related pods' status | K8s kube-state-metrics|
+| Pod Status Not Running |      | k8s_cluster_pod_status_not_running | The pods which are not running in the current phase | K8s kube-state-metrics|
+| Pod Status Waiting |      | k8s_cluster_pod_status_waiting | The pods and containers which are currently in the waiting status, with reasons shown | K8s kube-state-metrics|
+| Pod Status Terminated |      | k8s_cluster_container_status_terminated | The pods and containers which are currently in the terminated status, with reasons shown | K8s kube-state-metrics|
 
+### K8s Cluster Node Supported Metrics
 | Monitoring Panel | Unit | Metric Name | Description | Data Source |
-|-----|-----|-----|-----|-----|
-| Pod Total |  | k8s_node_pod_total | The number of pods in this node | K8s kube-state-metrics |
-| Node Status |  | k8s_node_node_status | The current status of this node | K8s kube-state-metrics |
-| CPU Resources | m | k8s_node_cpu_cores<br />k8s_node_cpu_cores_allocatable<br />k8s_node_cpu_cores_requests<br />k8s_node_cpu_cores_limits |  The capacity and the requests / Limits / Allocatable of the CPU  | K8s kube-state-metrics |
-| Memory Resources | GB | k8s_node_memory_total<br />k8s_node_memory_allocatable<br />k8s_node_memory_requests<br />k8s_node_memory_limits | The capacity and the requests / Limits / Allocatable of the memory | K8s kube-state-metrics |
-| Storage Resources | GB | k8s_node_storage_total<br />k8s_node_storage_allocatable | The capacity and allocatable of the storage | K8s kube-state-metrics |
-| CPU Usage | m | k8s_node_cpu_usage | The total usage of the CPU core, if there are 2 cores the maximum usage is 2000m | cAdvisor |
-| Memory Usage | GB | k8s_node_memory_usage | The totaly memory usage | cAdvisor |
+|-----|------|-----|-----|-----|
+| Pod Total |      | k8s_node_pod_total | The number of pods in this node | K8s kube-state-metrics |
+| Node Status |      | k8s_node_node_status | The current status of this node | K8s kube-state-metrics |
+| CPU Resources | m    | k8s_node_cpu_cores<br />k8s_node_cpu_cores_allocatable<br />k8s_node_cpu_cores_requests<br />k8s_node_cpu_cores_limits |  The capacity and the requests / Limits / Allocatable of the CPU  | K8s kube-state-metrics |
+| Memory Resources | Gi   | k8s_node_memory_total<br />k8s_node_memory_allocatable<br />k8s_node_memory_requests<br />k8s_node_memory_limits | The capacity and the requests / Limits / Allocatable of the memory | K8s kube-state-metrics |
+| Storage Resources | Gi   | k8s_node_storage_total<br />k8s_node_storage_allocatable | The capacity and allocatable of the storage | K8s kube-state-metrics |
+| CPU Usage | m    | k8s_node_cpu_usage | The total CPU core usage; if there are 2 cores, the maximum usage is 2000m | cAdvisor |
+| Memory Usage | Gi   | k8s_node_memory_usage | The total memory usage | cAdvisor |
 | Network I/O| KB/s | k8s_node_network_receive<br />k8s_node_network_transmit | The network receive and transmit | cAdvisor |
 
-### Service
-In these metrics, the pods are related to the selected service (`Current Endpoint in the dashboard`).
+## K8s Service Monitoring
+K8s Service monitoring provides observability into the status and resources of services running on Kubernetes.
+A K8s Service is modeled as a `Service` in the OAP and lands on the `Layer: K8S_SERVICE`.
 
+### K8s Service Supported Metrics
 | Monitoring Panel | Unit | Metric Name | Description | Data Source |
 |-----|-----|-----|-----|-----|
 | Service Pod Total |  | k8s_service_pod_total | The number of pods | K8s kube-state-metrics |
@@ -69,11 +66,9 @@ In these metrics, the pods are related to the selected service (`Current Endpoin
 | Pod Waiting |  | k8s_service_pod_status_waiting | The pods and containers which are currently in the waiting status, with reasons shown | K8s kube-state-metrics |
 | Pod Terminated |  | k8s_service_pod_status_terminated | The pods and containers which are currently in the terminated status, with reasons shown | K8s kube-state-metrics |
 | Pod Restarts |  | k8s_service_pod_status_restarts_total | The number of per container restarts related to the pods | K8s kube-state-metrics |
-| Pod Network Receive | KB/s | k8s_service_pod_network_receive | The network receive of the pods | cAdvisor |
-| Pod Network Transmit | KB/s | k8s_service_pod_network_transmit | The network transmit of the pods  | cAdvisor |
-| Pod Storage Usage | MB | k8s_service_pod_fs_usage | The storage resources total usage of pods related to this service | cAdvisor |
 
 ## Customizing 
 You can customize your own metrics/expression/dashboard panel.   
 The metrics definition and expression rules are found in `/config/otel-oc-rules/k8s-cluster.yaml,/config/otel-oc-rules/k8s-node.yaml, /config/otel-oc-rules/k8s-service.yaml`.  
-The dashboard panel configurations are found in `/config/ui-initialized-templates/k8s.yml`.
+The K8s Cluster dashboard panel configurations are found in `/config/ui-initialized-templates/k8s`.
+The K8s Service dashboard panel configurations are found in `/config/ui-initialized-templates/k8s_service`.
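
[Editor's note on the removed example] The bundled `otel-collector-config.yaml` is dropped in favor of the showcase repository. A condensed sketch of the pipeline it demonstrated (Prometheus receiver scraping kube-state-metrics via Kubernetes service discovery, OpenCensus exporter to the OAP) is below; the cluster name and OAP address are placeholders to adapt, not values from the showcase:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kube-state-metrics
          kubernetes_sd_configs:
            - role: endpoints
          relabel_configs:
            - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
              regex: kube-state-metrics
              action: keep
            - source_labels: []          # relabel the cluster name
              target_label: cluster
              replacement: my-cluster    # placeholder cluster name
processors:
  batch:
exporters:
  opencensus:
    endpoint: "oap.skywalking.svc:11800"  # placeholder OAP Server address
    insecure: true
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [opencensus]
```

A second `kubernetes-cadvisor` job (role: node, proxying `/metrics/cadvisor` through the API server) accompanied this in the removed file.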
diff --git a/docs/en/setup/backend/backend-telemetry.md b/docs/en/setup/backend/backend-telemetry.md
index e0a76d65f3..06178952d9 100644
--- a/docs/en/setup/backend/backend-telemetry.md
+++ b/docs/en/setup/backend/backend-telemetry.md
@@ -144,7 +144,7 @@ Set this up following these steps:
     regex: (.+)
     replacement: $$1 
 ```
-For the full example for OpenTelemetry Collector configuration and recommended version, you can refer to [otel-collector-oap.yaml](otel-collector-oap.yaml).
+For the full example for OpenTelemetry Collector configuration and recommended version, you can refer to [showcase](https://github.com/apache/skywalking-showcase/tree/main/deploy/platform/kubernetes/feature-so11y).
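
[Editor's note on the removed example] The removed `otel-collector-oap.yaml` boiled down to a Prometheus receiver job that keeps only OAP pods exposing a `prometheus-port`, relabels the service name, and exports to the OAP via OpenCensus; a condensed sketch (container and port names mirror the removed example and may differ in your deployment):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: skywalking-so11y
          metrics_path: /metrics
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_container_name, __meta_kubernetes_pod_container_port_name]
              action: keep
              regex: oap;prometheus-port
            - source_labels: []
              target_label: service
              replacement: oap-server
            - source_labels: [__meta_kubernetes_pod_name]
              target_label: host_name
exporters:
  opencensus:
    endpoint: "skywalking-oap:11800"  # the OAP Server address
    insecure: true
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [opencensus]
```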
 
 
 
diff --git a/docs/en/setup/backend/otel-collector-config.yaml b/docs/en/setup/backend/otel-collector-config.yaml
deleted file mode 100644
index ea823ed7e5..0000000000
--- a/docs/en/setup/backend/otel-collector-config.yaml
+++ /dev/null
@@ -1,169 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: otel-collector-conf
-  labels:
-    app: opentelemetry
-    component: otel-collector-conf
-  namespace: monitoring
-data:
-  otel-collector-config: |
-    receivers:
-      prometheus:
-        config:
-          global:
-            scrape_interval: 15s
-            evaluation_interval: 15s
-          scrape_configs:
-            - job_name: 'kubernetes-cadvisor'
-              scheme: https
-              tls_config:
-                ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
-              bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
-              kubernetes_sd_configs:
-              - role: node
-              relabel_configs:
-              - action: labelmap
-                regex: __meta_kubernetes_node_label_(.+)
-              - source_labels: []       # relabel the cluster name 
-                target_label: cluster
-                replacement: gke-cluster-1
-              - target_label: __address__
-                replacement: kubernetes.default.svc:443
-              - source_labels: [__meta_kubernetes_node_name]
-                regex: (.+)
-                target_label: __metrics_path__
-                replacement: /api/v1/nodes/$${1}/proxy/metrics/cadvisor
-              - source_labels: [instance]   # relabel the node name 
-                separator: ;
-                regex: (.+)
-                target_label: node
-                replacement: $$1
-                action: replace
-            - job_name: kube-state-metrics
-              kubernetes_sd_configs:
-              - role: endpoints
-              relabel_configs:
-              - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
-                regex: kube-state-metrics
-                replacement: $$1
-                action: keep
-              - action: labelmap
-                regex: __meta_kubernetes_service_label_(.+)
-              - source_labels: []  # relabel the cluster name 
-                target_label: cluster
-                replacement: gke-cluster-1
-    processors:
-      batch:
-    extensions:
-      health_check: {}
-      zpages: {}
-    exporters:
-      opencensus:
-        endpoint: "OAP:11800" # The OAP Server address
-        insecure: true    
-      logging:
-        logLevel: debug
-    service:
-      extensions: [health_check, zpages]
-      pipelines:
-        metrics:
-          receivers: [prometheus]
-          processors: [batch]
-          exporters: [opencensus,logging]
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: otel-collector
-  labels:
-    app: opentelemetry
-    component: otel-collector
-  namespace: monitoring
-spec:
-  ports:
-  - name: otlp # Default endpoint for OpenTelemetry receiver.
-    port: 55680
-    protocol: TCP
-    targetPort: 55680
-  - name: metrics # Default endpoint for querying metrics.
-    port: 8888
-  selector:
-    component: otel-collector
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: otel-collector
-  labels:
-    app: opentelemetry
-    component: otel-collector
-  namespace: monitoring
-spec:
-  selector:
-    matchLabels:
-      app: opentelemetry
-      component: otel-collector
-  minReadySeconds: 5
-  progressDeadlineSeconds: 120
-  replicas: 1 #TODO - adjust this to your own requirements
-  template:
-    metadata:
-      labels:
-        app: opentelemetry
-        component: otel-collector
-    spec:
-      containers:
-      - command:
-          - "/otelcol"
-          - "--config=/conf/otel-collector-config.yaml"
-          - "--log-level=DEBUG"
-#           Memory Ballast size should be max 1/3 to 1/2 of memory.
-          - "--mem-ballast-size-mib=683"
-        image: otel/opentelemetry-collector:0.29.0
-        name: otel-collector
-        resources:
-          limits:
-            cpu: 1
-            memory: 2Gi
-          requests:
-            cpu: 200m
-            memory: 400Mi
-        ports:
-        - containerPort: 55679 # Default endpoint for ZPages.
-        - containerPort: 55680 # Default endpoint for OpenTelemetry receiver.
-        - containerPort: 8888  # Default endpoint for querying metrics.
-        volumeMounts:
-        - name: otel-collector-config-vol
-          mountPath: /conf
-        livenessProbe:
-          httpGet:
-            path: /
-            port: 13133 # Health Check extension default port.
-        readinessProbe:
-          httpGet:
-            path: /
-            port: 13133 # Health Check extension default port.
-      volumes:
-        - configMap:
-            name: otel-collector-conf
-            items:
-              - key: otel-collector-config
-                path: otel-collector-config.yaml
-          name: otel-collector-config-vol
-          
diff --git a/docs/en/setup/backend/otel-collector-oap.yaml b/docs/en/setup/backend/otel-collector-oap.yaml
deleted file mode 100644
index 652150b9d1..0000000000
--- a/docs/en/setup/backend/otel-collector-oap.yaml
+++ /dev/null
@@ -1,180 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: otel-collector-conf
-  labels:
-    app: opentelemetry
-    component: otel-collector-conf
-  namespace: istio-system
-data:
-  otel-collector-config: |
-    receivers:
-      prometheus:
-        config:
-          global:
-            scrape_interval: 10s
-            evaluation_interval: 30s
-          scrape_configs:
-          - job_name: 'skywalking-so11y'
-            metrics_path: '/metrics'
-            kubernetes_sd_configs:
-            - role: pod
-            relabel_configs:
-            - source_labels: [__meta_kubernetes_pod_container_name, __meta_kubernetes_pod_container_port_name]
-              action: keep
-              regex: oap;prometheus-port 
-            - source_labels: []
-              target_label: service
-              replacement: oap-server
-            - source_labels: [__meta_kubernetes_pod_name]
-              target_label: host_name
-              regex: (.+)
-              replacement: $$1
-    processors:
-      batch:
-    extensions:
-      health_check: {}
-      zpages: {}
-    exporters:
-      opencensus:
-        endpoint: "skywalking-oap:11800" # The OAP Server address
-        insecure: true    
-      logging:
-        logLevel: debug
-    service:
-      extensions: [health_check, zpages]
-      pipelines:
-        metrics:
-          receivers: [prometheus]
-          processors: [batch]
-          exporters: [opencensus,logging]
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: otel-collector
-  labels:
-    app: opentelemetry
-    component: otel-collector
-  namespace: istio-system
-spec:
-  ports:
-  - name: otlp # Default endpoint for OpenTelemetry receiver.
-    port: 55680
-    protocol: TCP
-    targetPort: 55680
-  - name: metrics # Default endpoint for querying metrics.
-    port: 8888
-  selector:
-    component: otel-collector
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: otel-collector
-  labels:
-    app: opentelemetry
-    component: otel-collector
-  namespace: istio-system
-spec:
-  selector:
-    matchLabels:
-      app: opentelemetry
-      component: otel-collector
-  minReadySeconds: 5
-  progressDeadlineSeconds: 120
-  replicas: 1 #TODO - adjust this to your own requirements
-  template:
-    metadata:
-      labels:
-        app: opentelemetry
-        component: otel-collector
-    spec:
-      containers:
-      - command:
-          - "/otelcol"
-          - "--config=/conf/otel-collector-config.yaml"
-          - "--log-level=DEBUG"
-#           Memory Ballast size should be max 1/3 to 1/2 of memory.
-          - "--mem-ballast-size-mib=683"
-        image: otel/opentelemetry-collector:0.29.0
-        name: otel-collector
-        resources:
-          limits:
-            cpu: 1
-            memory: 2Gi
-          requests:
-            cpu: 200m
-            memory: 400Mi
-        ports:
-        - containerPort: 55679 # Default endpoint for ZPages.
-        - containerPort: 55680 # Default endpoint for OpenTelemetry receiver.
-        - containerPort: 8888  # Default endpoint for querying metrics.
-        volumeMounts:
-        - name: otel-collector-config-vol
-          mountPath: /conf
-        livenessProbe:
-          httpGet:
-            path: /
-            port: 13133 # Health Check extension default port.
-        readinessProbe:
-          httpGet:
-            path: /
-            port: 13133 # Health Check extension default port.
-      volumes:
-        - configMap:
-            name: otel-collector-conf
-            items:
-              - key: otel-collector-config
-                path: otel-collector-config.yaml
-          name: otel-collector-config-vol
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: otel-collector
-rules:
-- apiGroups: [""]
-  resources:
-  - services
-  - endpoints
-  - pods
-  verbs: ["get", "list", "watch"]
-- apiGroups:
-  - extensions
-  resources:
-  - ingresses
-  verbs: ["get", "list", "watch"]
-- nonResourceURLs: ["/metrics"]
-  verbs: ["get"]
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  labels:
-    app: opentelemetry
-    component: otel-collector
-  name: otel-collector
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: otel-collector
-subjects:
-- kind: ServiceAccount
-  name: default
-  namespace: istio-system