Posted to notifications@skywalking.apache.org by ha...@apache.org on 2019/01/25 12:44:25 UTC

[incubator-skywalking-website] branch mesh-loadtest updated: fix some issues

This is an automated email from the ASF dual-hosted git repository.

hanahmily pushed a commit to branch mesh-loadtest
in repository https://gitbox.apache.org/repos/asf/incubator-skywalking-website.git


The following commit(s) were added to refs/heads/mesh-loadtest by this push:
     new 4963969  fix some issues
4963969 is described below

commit 4963969fd105c35a3d49975cea99e12a4b65f999
Author: gaohongtao <ha...@gmail.com>
AuthorDate: Fri Jan 25 20:44:14 2019 +0800

    fix some issues
---
 docs/blog/2019-01-25-mesh-loadtest.md | 18 +++++++++---------
 docs/blog/README.md                   |  2 +-
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/blog/2019-01-25-mesh-loadtest.md b/docs/blog/2019-01-25-mesh-loadtest.md
index 261224d..82bb6cc 100644
--- a/docs/blog/2019-01-25-mesh-loadtest.md
+++ b/docs/blog/2019-01-25-mesh-loadtest.md
@@ -1,27 +1,27 @@
-# Performance Testing of Service Mesh Receiver
+# SkyWalking performance in Service Mesh scenario
 
 - Author: Hongtao Gao, Apache SkyWalking & ShardingSphere PMC
 - [GitHub](https://github.com/hanahmily), [Twitter](https://twitter.com/hanahmily), [Linkedin](https://www.linkedin.com/in/gao-hongtao-47b835168/)
 
 Jan. 25th, 2019
 
-Service mesh receiver was first introduced in Apache SkyWalking 6.0.0-beta. It is designed to provide a common entrance for receiving telemetry data from service mesh framework, for instance, Istio, Linkerd, etc. What’s the service mesh? According to Istio’s explain:
+Service mesh receiver was first introduced in Apache SkyWalking 6.0.0-beta. It is designed to provide a common entrance for receiving telemetry data from service mesh frameworks, for instance, Istio, Linkerd, Envoy, etc. What is a service mesh? According to Istio’s explanation:
 
 The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them.
 
-As a PMC of Apache SkyWalking, I tested trace receiver and well understood the performance of collectors in trace scenario. I also would like to figure out the performance of service mesh receiver.
+As a PMC member of Apache SkyWalking, I have tested the trace receiver and have a good understanding of collector performance in the trace scenario. I would also like to figure out the performance of the service mesh receiver.
 
 ## Differences between trace and service mesh
 
 The following chart presents a typical trace map:
 
-![](/static/blog/2019-01-25-mesh-loadtest/image5.png)
+![](../.vuepress/public/static/blog/2019-01-25-mesh-loadtest/image5.png)
 
 You can find a variety of elements in it, such as web services, local methods, databases, caches, MQ, and so on. But a service mesh only collects service network telemetry data, which contains the entrance and exit data of a service. A smaller quantity of data is sent to the service mesh receiver than to the trace receiver.
 
 But using a sidecar is a little different.
 
-![](/static/blog/2019-01-25-mesh-loadtest/image1.png)
+![](../.vuepress/public/static/blog/2019-01-25-mesh-loadtest/image1.png)
 
 When the client requests “A”, a segment is sent to the service mesh receiver from “A”’s sidecar. If “A” depends on “B”, another segment is sent from “A”’s sidecar. But for a trace system, only one segment is received by the collector. The sidecar model splits one segment into smaller segments, which increases the service mesh receiver’s network overhead.
 
@@ -45,7 +45,7 @@ Receiving mesh fragments per second(MPS) depends on the following variables.
 
 In this test, I use the Bookinfo app as a demo cluster.
 
-![](/static/blog/2019-01-25-mesh-loadtest/image6.png)
+![](../.vuepress/public/static/blog/2019-01-25-mesh-loadtest/image6.png)
 
 So every request will touch at most 4 nodes. Plus, picking the sidecar mode (every request sends two pieces of telemetry data per node), the MPS will be QPS * 4 * 2, as sketched below.
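 A quick sketch of this estimate (illustrative only; the node count of 4 and the factor of 2 for sidecar mode come from the Bookinfo example above):
 
 ```python
 # Rough estimate of mesh fragments per second (MPS) for a given QPS.
 # Assumptions: every request touches 4 nodes (Bookinfo), and sidecar mode
 # produces 2 pieces of telemetry data per node.
 def mesh_fragments_per_second(qps, nodes=4, telemetry_per_node=2):
     return qps * nodes * telemetry_per_node
 
 print(mesh_fragments_per_second(3000))  # 24000, i.e. ~24k MPS for 3k QPS
 ```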
 
@@ -58,17 +58,17 @@ There are also some important metrics that should be explained
 
 ### Mini Unit
 
-![](/static/blog/2019-01-25-mesh-loadtest/image3.png)
+![](../.vuepress/public/static/blog/2019-01-25-mesh-loadtest/image3.png)
 
 You can see that the collector can process up to **25k** pieces of data per second. The CPU usage is about 4 cores. Most of the query latency is less than 50ms. After logging in to the VM on which the collector instance is running, I found that the system load was reaching the limit (max is 8).
 
-![](/static/blog/2019-01-25-mesh-loadtest/image2.png)
+![](../.vuepress/public/static/blog/2019-01-25-mesh-loadtest/image2.png)
 
 According to the previous formula, a single collector instance could process **3k** QPS of Bookinfo traffic.
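 A rough back-calculation under the same assumptions (a sketch, not an exact benchmark figure):
 
 ```python
 # Back out the supported Bookinfo QPS from the measured MPS ceiling.
 max_mps = 25_000                 # measured limit of a single collector instance
 fragments_per_request = 4 * 2    # 4 nodes, 2 pieces of telemetry data each in sidecar mode
 print(max_mps / fragments_per_request)  # 3125.0, i.e. roughly 3k QPS
 ```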
 
 ### Standard Cluster
 
-![](/static/blog/2019-01-25-mesh-loadtest/image4.png)
+![](../.vuepress/public/static/blog/2019-01-25-mesh-loadtest/image4.png)
 
 Compared to the mini unit, the cluster’s throughput increases linearly. Three instances provide a total of 80k per second processing power. Query latency increases slightly, but it is still small (less than 500ms). I also checked the system load of every collector instance, and all of them reached the limit. The cluster could process about 10k QPS of Bookinfo telemetry data (80k MPS ÷ 8 = 10k QPS).
 
diff --git a/docs/blog/README.md b/docs/blog/README.md
index 4ea5792..d2d6f9e 100755
--- a/docs/blog/README.md
+++ b/docs/blog/README.md
@@ -3,7 +3,7 @@ layout: LayoutBlog
 
 blog:
 
-- title: Performance Testing of Service Mesh Receiver
+- title: SkyWalking performance in Service Mesh scenario
   name: 2019-01-25-mesh-loadtest
   time: Hongtao Gao, Jan. 25th, 2019
   short: Service mesh receiver performance test on Google Kubernetes Engine.