Posted to notifications@skywalking.apache.org by wu...@apache.org on 2019/03/29 05:17:29 UTC
[incubator-skywalking] branch zipkin-trace updated: Finish doc and reset application.yml
This is an automated email from the ASF dual-hosted git repository.
wusheng pushed a commit to branch zipkin-trace
in repository https://gitbox.apache.org/repos/asf/incubator-skywalking.git
The following commit(s) were added to refs/heads/zipkin-trace by this push:
new 87d2c81 Finish doc and reset application.yml
87d2c81 is described below
commit 87d2c81a223c7321daf467d43d225a9ef904ba5e
Author: Wu Sheng <wu...@foxmail.com>
AuthorDate: Thu Mar 28 22:17:19 2019 -0700
Finish doc and reset application.yml
---
docs/en/setup/backend/backend-receivers.md | 26 +++++++++++++++--
docs/en/setup/backend/backend-storage.md | 20 +++++++++++++
.../src/main/resources/application.yml | 34 +++++++---------------
3 files changed, 54 insertions(+), 26 deletions(-)
diff --git a/docs/en/setup/backend/backend-receivers.md b/docs/en/setup/backend/backend-receivers.md
index 237240c..c54b94b 100644
--- a/docs/en/setup/backend/backend-receivers.md
+++ b/docs/en/setup/backend/backend-receivers.md
@@ -11,8 +11,7 @@ We have following receivers, and `default` implementors are provided in our Apac
1. **receiver-jvm**. gRPC services accept JVM metric data.
1. **istio-telemetry**. Istio telemetry comes from the official Istio bypass adaptor; this receiver matches its gRPC services.
1. **envoy-metric**. Envoy `metrics_service` is supported by this receiver. The OAL script supports all GAUGE type metrics.
-1. **receiver_zipkin**. HTTP service accepts Span in Zipkin v1 and v2 formats. Notice, this receiver only
-works as expected in backend single node mode. Cluster mode is not supported. Welcome anyone to improve this.
+1. **receiver_zipkin**. See [details](#zipkin-receiver).
The sample settings of these receivers should already be in the default `application.yml`, and are also listed here
```yaml
@@ -59,4 +58,25 @@ receiver-sharing-server:
```
Notice, if you add these settings, make sure they are not the same as the core module's,
-because gRPC/HTTP servers of core are still used for UI and OAP internal communications.
\ No newline at end of file
+because the gRPC/HTTP servers of the core module are still used for UI and OAP internal communications.
+
+## Zipkin receiver
+Zipkin receiver can work in two different modes.
+1. Tracing mode (default). In this mode, the SkyWalking OAP acts like a Zipkin collector, providing persistence and query,
+but it doesn't analyze metrics from the spans. In most cases, we suggest using this mode when metrics come from a service mesh.
+Also, in this mode, the Zipkin receiver requires the `zipkin-elasticsearch` storage implementation to be active.
+Read [this](backend-storage.md#elasticsearch-6-with-zipkin-trace-extension) to learn
+how to activate it.
+1. Analysis mode (not production ready). The receiver accepts Zipkin v1/v2 formats through its HTTP service, transforms the traces
+to SkyWalking's native format, and analyzes them like SkyWalking traces. This feature can't work in a production environment,
+because Zipkin tag/endpoint values are unpredictable and we can't make sure they fit production requirements.
+
+To activate `analysis mode`, you should set the `needAnalysis` config.
+```yaml
+receiver_zipkin:
+ default:
+ host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
+ port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
+ contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}
+ needAnalysis: true
+```
\ No newline at end of file
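In either mode, the receiver ingests spans in Zipkin's JSON formats. As a rough illustration of what a client would send, here is a minimal Zipkin v2 span payload sketched in Python (field names follow the Zipkin v2 span model; the port and root context path are the receiver defaults shown above, so the collector endpoint path `/api/v2/spans` is an assumption based on standard Zipkin conventions):

```python
import json

# A minimal Zipkin v2 span. IDs are hex strings; timestamp and
# duration are in epoch microseconds / microseconds.
span = {
    "traceId": "463ac35c9f6413ad48485a3953bb6124",
    "id": "a2fb4a1d1a96d312",
    "name": "get /api",
    "kind": "SERVER",
    "timestamp": 1553810239000000,
    "duration": 1500,
    "localEndpoint": {"serviceName": "frontend"},
    "tags": {"http.method": "GET"},
}

# Zipkin collectors accept a JSON array of spans POSTed to
# /api/v2/spans; with the defaults above that would be port 9411.
payload = json.dumps([span])
print(payload)
```

The same payload works for both modes; only the backend's handling of it differs.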
diff --git a/docs/en/setup/backend/backend-storage.md b/docs/en/setup/backend/backend-storage.md
index b9baa3e..f769690 100644
--- a/docs/en/setup/backend/backend-storage.md
+++ b/docs/en/setup/backend/backend-storage.md
@@ -49,6 +49,26 @@ storage:
concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
```
+### ElasticSearch 6 with Zipkin trace extension
+This implementation shares most of the `elasticsearch` implementation, just extended to support Zipkin span storage.
+It has all the same configs.
+```yaml
+storage:
+ zipkin-elasticsearch:
+ nameSpace: ${SW_NAMESPACE:""}
+ clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
+ user: ${SW_ES_USER:""}
+ password: ${SW_ES_PASSWORD:""}
+ indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:2}
+ indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:0}
+ # Batch process setting, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
+ bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:2000} # Execute the bulk every 2000 requests
+ bulkSize: ${SW_STORAGE_ES_BULK_SIZE:20} # flush the bulk every 20mb
+ flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
+ concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
+```
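The three bulk settings above interact: a flush fires as soon as any one threshold is crossed. A minimal sketch of that decision, under the assumption that the backend mirrors the Elasticsearch BulkProcessor semantics linked in the comment (illustrative only, not the OAP's actual code):

```python
def should_flush(pending_actions, pending_bytes, seconds_since_flush,
                 bulk_actions=2000, bulk_size_mb=20, flush_interval_s=10):
    """Flush when any limit is hit: request count, payload size, or time."""
    return (pending_actions >= bulk_actions
            or pending_bytes >= bulk_size_mb * 1024 * 1024
            or seconds_since_flush >= flush_interval_s)

print(should_flush(2000, 0, 0))  # request-count threshold reached
print(should_flush(1, 0, 10))    # flush interval elapsed
print(should_flush(1, 0, 1))     # no threshold reached yet
```

Tuning any one of the three therefore only matters if the others don't trigger first.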
+
+
### About Namespace
When namespace is set, names of all indexes in ElasticSearch will use it as prefix.
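The prefixing behavior can be sketched as follows (a hypothetical helper for illustration; the actual index-naming code and separator character are internal to the storage implementation):

```python
def index_name(namespace, base):
    # When a namespace is set, every index name gets it as a prefix;
    # the "_" separator here is an assumption for illustration.
    return f"{namespace}_{base}" if namespace else base

print(index_name("sw", "segment"))  # sw_segment
print(index_name("", "segment"))    # segment
```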
diff --git a/oap-server/server-starter/src/main/resources/application.yml b/oap-server/server-starter/src/main/resources/application.yml
index c421504..4ddffed 100644
--- a/oap-server/server-starter/src/main/resources/application.yml
+++ b/oap-server/server-starter/src/main/resources/application.yml
@@ -55,24 +55,7 @@ core:
dayMetricsDataTTL: ${SW_CORE_DAY_METRIC_DATA_TTL:45} # Unit is day
monthMetricsDataTTL: ${SW_CORE_MONTH_METRIC_DATA_TTL:18} # Unit is month
storage:
-# elasticsearch:
-# nameSpace: ${SW_NAMESPACE:""}
-# clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
-# user: ${SW_ES_USER:""}
-# password: ${SW_ES_PASSWORD:""}
-# indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:2}
-# indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:0}
-# # Batch process setting, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
-# bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:2000} # Execute the bulk every 2000 requests
-# bulkSize: ${SW_STORAGE_ES_BULK_SIZE:20} # flush the bulk every 20mb
-# flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
-# concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
-# h2:
-# driver: ${SW_STORAGE_H2_DRIVER:org.h2.jdbcx.JdbcDataSource}
-# url: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db}
-# user: ${SW_STORAGE_H2_USER:sa}
-# mysql:
- zipkin-elasticsearch:
+ elasticsearch:
nameSpace: ${SW_NAMESPACE:""}
clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
user: ${SW_ES_USER:""}
@@ -84,6 +67,11 @@ storage:
bulkSize: ${SW_STORAGE_ES_BULK_SIZE:20} # flush the bulk every 20mb
flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
+# h2:
+# driver: ${SW_STORAGE_H2_DRIVER:org.h2.jdbcx.JdbcDataSource}
+# url: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db}
+# user: ${SW_STORAGE_H2_USER:sa}
+# mysql:
receiver-sharing-server:
default:
receiver-register:
@@ -110,11 +98,11 @@ istio-telemetry:
default:
envoy-metric:
default:
-receiver_zipkin:
- default:
- host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
- port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
- contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}
+#receiver_zipkin:
+# default:
+# host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
+# port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
+# contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}
query:
graphql:
path: ${SW_QUERY_GRAPHQL_PATH:/graphql}
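Every value in these configs uses the `${ENV_VAR:default}` placeholder form: the environment variable wins if set, otherwise the default after the colon applies. A rough sketch of that resolution (an illustrative re-implementation, not the OAP server's actual code):

```python
import os
import re

# Matches "${NAME:default}", capturing the variable name and its default.
_PLACEHOLDER = re.compile(r"\$\{(\w+):([^}]*)\}")

def resolve(value, env=os.environ):
    """Substitute each placeholder with the env var, falling back to the default."""
    return _PLACEHOLDER.sub(lambda m: env.get(m.group(1), m.group(2)), value)

print(resolve("${SW_RECEIVER_ZIPKIN_PORT:9411}", env={}))  # 9411
print(resolve("${SW_RECEIVER_ZIPKIN_PORT:9411}",
              env={"SW_RECEIVER_ZIPKIN_PORT": "9412"}))    # 9412
```

This is why the diff above can switch active implementations by commenting blocks in and out without touching any environment variables.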