Posted to commits@dolphinscheduler.apache.org by ki...@apache.org on 2022/03/16 07:58:23 UTC

[dolphinscheduler-website] branch master updated: add nesw (#736)

This is an automated email from the ASF dual-hosted git repository.

kirs pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 4b4e5a2  add nesw  (#736)
4b4e5a2 is described below

commit 4b4e5a2ecadced120dfc1303655564bf16c92528
Author: lifeng <53...@users.noreply.github.com>
AuthorDate: Wed Mar 16 15:58:15 2022 +0800

    add nesw  (#736)
    
    * add news How Does 360 DIGITECH process 10,000+ workflow instances per day by Apache DolphinScheduler
    
    add news How Does 360 DIGITECH process 10,000+ workflow instances per day by Apache DolphinScheduler
    
    * Update How_Does_360_DIGITECH_process_10_000+_workflow_instances_per_day.md
    
    * updata
    
    * add nesw hangzhou_cisco
    
    add nesw hangzhou_cisco
    
    * Update Hangzhou_cisco.md.md
    
    * Update Hangzhou_cisco.md
---
 blog/en-us/Hangzhou_cisco.md    | 138 +++++++++++++++++++++++++++++++++++++++
 blog/zh-cn/Hangzhou_cisco.md.md | 140 ++++++++++++++++++++++++++++++++++++++++
 img/3-16/1.png                  | Bin 0 -> 93268 bytes
 img/3-16/10.png                 | Bin 0 -> 65120 bytes
 img/3-16/11.png                 | Bin 0 -> 140032 bytes
 img/3-16/12.png                 | Bin 0 -> 36607 bytes
 img/3-16/2.png                  | Bin 0 -> 123611 bytes
 img/3-16/3.png                  | Bin 0 -> 55215 bytes
 img/3-16/4.png                  | Bin 0 -> 94633 bytes
 img/3-16/5.png                  | Bin 0 -> 173038 bytes
 img/3-16/6.png                  | Bin 0 -> 32950 bytes
 img/3-16/7.png                  | Bin 0 -> 72783 bytes
 img/3-16/8.png                  | Bin 0 -> 49048 bytes
 img/3-16/9.png                  | Bin 0 -> 73198 bytes
 img/3-16/Eng/1.png              | Bin 0 -> 93268 bytes
 img/3-16/Eng/10.png             | Bin 0 -> 43036 bytes
 img/3-16/Eng/11.png             | Bin 0 -> 49177 bytes
 img/3-16/Eng/12.png             | Bin 0 -> 33920 bytes
 img/3-16/Eng/2.png              | Bin 0 -> 123611 bytes
 img/3-16/Eng/3.png              | Bin 0 -> 109329 bytes
 img/3-16/Eng/4.png              | Bin 0 -> 45960 bytes
 img/3-16/Eng/5.png              | Bin 0 -> 42535 bytes
 img/3-16/Eng/6.png              | Bin 0 -> 66073 bytes
 img/3-16/Eng/7.png              | Bin 0 -> 30643 bytes
 img/3-16/Eng/8.png              | Bin 0 -> 53776 bytes
 img/3-16/Eng/9.png              | Bin 0 -> 78354 bytes
 site_config/blog.js             |  15 +++++
 site_config/home.jsx            |  29 +++++----
 28 files changed, 308 insertions(+), 14 deletions(-)

diff --git a/blog/en-us/Hangzhou_cisco.md b/blog/en-us/Hangzhou_cisco.md
new file mode 100644
index 0000000..da6fac3
--- /dev/null
+++ b/blog/en-us/Hangzhou_cisco.md
@@ -0,0 +1,138 @@
+# Cisco Hangzhou's Journey Through the Apache DolphinScheduler Alert Module Refactor
+
+<div align=center>
+
+<img src="/img/3-16/Eng/1.png"/>
+
+</div>
+
+>Cisco Hangzhou has introduced Apache DolphinScheduler into the company's self-built big data platform. At present, the team of **Qingwang Li, Big Data Engineer at Cisco Hangzhou** has largely completed the Alert module refactor, which aims to build a more complete Alert module that meets the needs of complex alerting in business scenarios.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/2.png"/>
+
+</div>
+
+Qingwang Li
+
+Big Data Engineer at Cisco Hangzhou, responsible for big data development in areas such as Spark and scheduling systems.
+
+We ran into many problems when using our original scheduling platform to process big data tasks. For example, consider a task that processes and aggregates data for analysis: multiple upstream Spark tasks first process and analyze data from different data sources, and a final Spark task aggregates the intermediate results to produce the data we want. Unfortunately, the scheduling platform could not execute multiple tasks serially, so we had to estimate each task's processing time to set staggered start times, and when one task failed, we had to stop the subsequent tasks manually. This approach was neither convenient nor elegant.
+To our surprise, the core feature of Apache DolphinScheduler, **workflow definitions that chain tasks together**, fits our needs perfectly. So we introduced Apache DolphinScheduler into our big data platform, and I was mainly responsible for the Alert module refactor. Other colleagues are now working on K8s integration, with the goal of executing tasks in K8s in the future.
+
+Today, I will share the refactoring journey of the Alert module.
+
+## 01 **Alert Module Design**
+
+<div align=center>
+<img src="/img/3-16/Eng/3.png"/>
+</div>
+
+Design of the DolphinScheduler Alert module
+
+The Alert mode of Apache DolphinScheduler 1.0 relied on configuring alert.properties to send alerts via channels such as email and SMS, but this method no longer suits the current scenario. The community has also refactored the alarm module; for details of the design ideas, please refer to the official documents:
+
+[https://github.com/apache/dolphinscheduler/issues/3049](https://github.com/apache/dolphinscheduler/issues/3049)
+
+[https://dolphinscheduler.apache.org/en-us/development/backend/spi/alert.html](https://dolphinscheduler.apache.org/en-us/development/backend/spi/alert.html)
+
+The Apache DolphinScheduler alert module is an independently started service, and one of its cores is the AlertPluginManager class. The alarm module integrates many plugins, such as DingTalk, WeChat, Feishu, and email, each written into the source code in an independent form. When the service starts, the plugins are parsed and their configured parameters are formatted into JSON, from which the front-end page is rendered automatically. AlertPluginManager caches the plugins in memory at startup, and the AlertServer class starts a thread pool that scans the DB periodically.
+
+When a workflow is configured with a notification policy and the Worker finishes executing it with a result that matches the policy, the alert data is inserted into the DB. The thread pool then scans the DB and calls the send method of the AlertSender class, passing in the alert data. Alert data is bound to an alarm group, and one alarm group corresponds to multiple alert instances. The AlertSender class traverses the alert instances, obtains each plugin instance through the AlertPluginManager class, calls the instance's send method, and finally updates the result. This is the entire alerting process of Apache DolphinScheduler.
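The dispatch loop described above can be sketched in simplified form. This is an illustrative, self-contained sketch, not the actual DolphinScheduler source: the `AlertChannel`/`AlertInstance` types and the plugin cache below are simplified stand-ins for the real classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the alerting flow: alert data is bound to an alarm
// group, the group maps to several alert instances, and each instance's
// plugin channel is looked up in the in-memory plugin cache and invoked.
public class AlertFlowSketch {

    // stand-in for a plugin's send channel (the real plugins implement an
    // AlertChannel-style interface resolved by AlertPluginManager)
    interface AlertChannel {
        String send(String title, String content);
    }

    static class AlertInstance {
        final String pluginName;
        AlertInstance(String pluginName) { this.pluginName = pluginName; }
    }

    // stand-in for AlertPluginManager's plugin cache built at startup
    static final Map<String, AlertChannel> PLUGIN_CACHE = new HashMap<>();

    // stand-in for the AlertSender loop: traverse the group's instances,
    // resolve each plugin from the cache, call send, and collect results
    // (the real implementation updates each result back to the DB)
    static List<String> send(List<AlertInstance> instances, String title, String content) {
        List<String> results = new ArrayList<>();
        for (AlertInstance instance : instances) {
            AlertChannel channel = PLUGIN_CACHE.get(instance.pluginName);
            if (channel != null) {
                results.add(channel.send(title, content));
            }
        }
        return results;
    }
}
```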
+
+It should be noted that an RPC service is also started along with the Alert server. This is an alerting path designed for special task types, such as SQL query reports: it allows Workers to access the Alert server directly through RPC and complete the alert via the Alert module without writing the data to the DB. On the whole, however, the alerting mode of Apache DolphinScheduler is still based on writing to the DB and interacting asynchronously.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/4.png"/>
+
+</div>
+
+After defining the workflow, you can set the notification policy and bind the alarm group before starting.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/5.png"/>
+
+</div>
+
+In the task dimension, you can configure a timeout alarm that triggers when the task times out. There is no alarm group configuration here; tasks and workflows share the same alarm group, so when a task times out, the alert is pushed to the alarm group set on the workflow.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/6.png"/>
+
+</div>
+
+The above figure is a flowchart of the system alarm configuration. It shows that a workflow can be configured with multiple task instances, that tasks can trigger alarms on timeout, and that workflow success or failure can trigger alarms. An alarm group can be bound to multiple alarm instances. But this configuration mode is not reasonable: we hope that alarm instances can also match the status of the workflow/task instance, that is, workflow success and failure call the same alarm group but trigger different alarm instances. This would fit real-world scenarios better.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/7.png"/>
+
+</div>
+
+Create an alarm group that can be bound to multiple alarm instances.
+
+## 02 **Big data task alarm scenario**
+
+The following are some common big data task alarm scenarios in our daily work.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/8.png"/>
+
+</div>
+
+For scheduled tasks, notifications are sent before execution starts, when the task goes online or offline or its parameters are modified, and when the task succeeds or fails. The difference is that for different results of the same task, we want to trigger different notifications: for a successful task, an SMS or a DingTalk/WeChat group notification is enough, while for a failed task, we need to notify the responsible developers as soon as possible to get a faster response, and @mentioning them in the DingTalk/WeChat group or calling them by phone is more timely. At present, the company's task scheduling platform sends notifications by calling an API inside the task, a way that is tightly coupled with the code and extremely inconvenient; in fact, it can be abstracted into a more general module.
+
+Although the architecture of Apache DolphinScheduler meets the requirements of the actual scenario, the problem is that the page configuration of the alarm module can only choose to trigger notifications on task success or failure, and both are bound to the same alarm group. That is, the alerting channel is the same regardless of success or failure, which does not satisfy our need, in a real production environment, to notify different results in different ways. Therefore, we made some modifications to the Alert module.
+
+## 03 **Alert module modification**
+
+<div align=center>
+
+<img src="/img/3-16/Eng/9.png"/>
+
+</div>
+
+The first refactoring point is the alert instance. Previously, once an alarm instance was added, any triggered alarm would invoke that instance's send method. We wanted to be able to bind an alarm strategy when defining an alarm instance, with three options: send on success, send on failure, and send on both success and failure.
+
+In the task definition dimension, there is a timeout alarm function, which actually corresponds to the failure strategy.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/10.png"/>
+
+</div>
+
+The above picture shows the completed configuration page. On the Create Alarm Instance page, we added an alarm type field that chooses whether the plugin is called on success, on failure, or in both cases.
+
+<div align=center>
+
+<img src="/img/3-16/Eng/11.png"/>
+
+</div>
+
+The above picture shows the architecture of the Apache DolphinScheduler alarm module after the refactor. We have made two changes to it.
+
+First, when a workflow or task finishes execution and triggers an alarm, its execution result, success or failure, is saved when the alarm data is written to the DB.
+
+Second, we added a logical judgment before calling an alarm instance's send method: the alarm instance is matched against the task status; if it matches, the instance's send logic is executed, and if not, the instance is filtered out.
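The matching judgment can be illustrated with a small sketch. The enum and method names here are hypothetical stand-ins, not the identifiers used in the actual patch (see PR #8636 for the real code):

```java
// Sketch of the judgment added before an alert instance's send method is
// called: the strategy configured on the instance is matched against the
// execution result saved with the alert data, and non-matching instances
// are filtered out.
public class AlertStrategySketch {

    // strategy bound to an alert instance when it is defined
    enum WarningType { SUCCESS, FAILURE, ALL }

    // execution result of the workflow/task saved when writing to the DB
    enum AlertStatus { SUCCESS, FAILURE }

    static boolean shouldSend(WarningType instanceType, AlertStatus status) {
        switch (instanceType) {
            case ALL:
                return true;
            case SUCCESS:
                return status == AlertStatus.SUCCESS;
            case FAILURE:
                return status == AlertStatus.FAILURE;
            default:
                return false;
        }
    }
}
```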
+
+The refactored alarm module supports the following scenarios:
+
+<div align=center>
+
+<img src="/img/3-16/Eng/12.png"/>
+
+</div>
+
+For detailed design, please refer to the issue: [https://github.com/apache/dolphinscheduler/issues/7992](https://github.com/apache/dolphinscheduler/issues/7992)
+
+See the code for details: [https://github.com/apache/dolphinscheduler/pull/8636](https://github.com/apache/dolphinscheduler/pull/8636)
+
+In addition, we have put forward some proposals to the community for the Apache DolphinScheduler alarm module. Anyone interested in these issues is welcome to follow up on the work together:
+
+* Trigger a notification when a workflow starts, goes online or offline, or has its parameters modified;
+* Another alerting scenario is worker monitoring. If a worker hangs or disconnects from ZooKeeper and loses its heartbeat, it is considered down, and an alarm is triggered against the alarm group with ID 1 by default. This behavior is only explained in the source code, so it is easy to overlook; if you have not deliberately set up an alarm group with ID 1, you will not be notified of worker downtime in time;
+* The alarm module currently supports Feishu, DingTalk, WeChat, Email, and other plugins commonly used by users in China, while users abroad are more used to plugins like Webex Teams or PagerDuty. We developed these two plugins and contributed them to the community. There are other plugins commonly used abroad, such as Microsoft Teams; anyone interested is welcome to submit a PR to the community.
+Last but not least, big data practitioners are often unfamiliar with front-end development and may give up on contributing an alarm plugin because of it. But you do not need to write any front-end code to develop an Apache DolphinScheduler alert plugin. When creating a new alert instance plugin, you only need to configure in Java code the parameters to be entered or the buttons to be selected on the page (see org.apache.dolphinscheduler.spi.params for the source); the system automatically formats them into JSON, and the front end renders the page from that JSON via form-create. So there is no need to worry about front-end work at all.
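As a rough illustration of that idea (this is not the real org.apache.dolphinscheduler.spi.params API, just a self-contained sketch of the pattern), declaring a plugin's page fields in Java and formatting them as JSON for the front end to render might look like this:

```java
import java.util.List;
import java.util.StringJoiner;

// Illustrative only: declare the input fields an alert plugin needs in
// plain Java, then format them as a JSON array so a form renderer such
// as form-create could build the configuration page automatically.
public class PluginParamsSketch {

    static class InputParam {
        final String field;
        final String title;
        final boolean required;

        InputParam(String field, String title, boolean required) {
            this.field = field;
            this.title = title;
            this.required = required;
        }

        String toJson() {
            return String.format("{\"field\":\"%s\",\"title\":\"%s\",\"required\":%b}",
                    field, title, required);
        }
    }

    static String toJsonArray(List<InputParam> params) {
        StringJoiner joiner = new StringJoiner(",", "[", "]");
        for (InputParam p : params) {
            joiner.add(p.toJson());
        }
        return joiner.toString();
    }
}
```

With a list like `new InputParam("webhook", "WebHook URL", true)` (a hypothetical field), the formatted JSON describes the form fields, so the plugin author writes no front-end code.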
+
diff --git a/blog/zh-cn/Hangzhou_cisco.md.md b/blog/zh-cn/Hangzhou_cisco.md.md
new file mode 100644
index 0000000..cab6fba
--- /dev/null
+++ b/blog/zh-cn/Hangzhou_cisco.md.md
@@ -0,0 +1,140 @@
+# 杭州思科对 Apache DolphinScheduler Alert 模块的改造
+
+<div align=center>
+<img src="/img/3-16/1.png"/>
+</div>
+
+杭州思科已经将 Apache DolphinScheduler 引入公司自建的大数据平台。目前,**杭州思科大数据工程师 李庆旺** 负责 Alert 模块的改造已基本完成,以更完善的 Alert 模块适应实际业务中对复杂告警的需求。
+<div align=center>
+
+<img src="/img/3-16/2.png"/>
+
+</div>
+
+李庆旺
+
+杭州思科 大数据工程师,主要负责 Spark、调度系统等大数据方向开发。
+
+我们在使用原有的调度平台处理大数据任务时,在操作上多有不便。比如一个对数据进行处理聚合分析的任务,首先由多个前置 Spark 任务对不同数据源数据进行处理、分析。最后的 Spark 任务对这期间处理的结果进行再次聚合、分析,得到我们想要的最终数据。但遗憾的是当时的调度平台无法串行执行多个任务,需要估算任务处理时间来设置多个任务的开始执行时间。同时其中一个任务执行失败,需要手动停止后续任务。这种方式既不方便,也不优雅。
+
+而 Apache DolphinScheduler 的核心功能——**工作流定义可以将任务串联起来**,完美契合我们的需求。于是,我们将 Apache DolphinScheduler 引入自己的大数据平台,而我主要负责 Alert 模块改造。目前我们其他同事也在推进集成 K8s,希望未来任务在 K8s 中执行。
+
+今天分享的是 Alert 模块的改造。
+
+## 01 **Alert 模块的设计**
+
+<div align=center>
+
+<img src="/img/3-16/3.png"/>
+
+</div>
+
+DolphinScheduler Alert 模块的设计
+
+Apache DolphinScheduler 1.0 版本的 Alert 模式使用配置alert.properties的方式,通过配置邮箱、短信等实现告警,但这样的方式已经不适用于当前的场景了。官方也进行过告警模块重构,详情设计思路参考官方文档:
+
+[https://github.com/apache/dolphinscheduler/issues/3049](https://github.com/apache/dolphinscheduler/issues/3049)
+
+[https://dolphinscheduler.apache.org/zh-cn/development/backend/spi/alert.html](https://dolphinscheduler.apache.org/zh-cn/development/backend/spi/alert.html)
+
+
+Apache DolphinScheduler 告警模块是一个独立启动的服务,核心之一是 AlertPluginManager 类。告警模块集成了很多插件,如钉钉、微信、飞书、邮件等,以独立的形式写在源码中,启动服务时会解析插件并将配置的参数格式化成JSON形式,前端通过JSON自动渲染出页面。AlertPluginManager 在启动时会缓存插件到内存中。AlertServer类会启动线程池,定时扫描DB。
+
+当工作流配置了通知策略,同时Worker 执行工作流结束,执行结果匹配通知策略成功后,DB插入告警数据后,线程池扫描 DB,调用AlertSender 类的send方法传入告警数据。告警数据绑定的是告警组,一个告警组对应了多个告警实例。AlertSender类遍历告警实例,通过AlertPluginManager类获取插件实例,调用实例的发送方法,最后更新结果。这是 Apache DolphinScheduler 的整个告警流程。
+
+需要注意的是,Alert server 启动的同时也启动了 RPC 服务,这是一种针对特殊类型任务,如 SQL 查询报表而设计的告警方式,可以让 Worker 通过 RPC 直接访问 Alert Server,利用 Alert 模块完成告警,这个数据不写入 DB。但从整体上来说,Apache DolphinScheduler 的告警模式还是以写 DB,异步交互的方式为主。
+
+<div align=center>
+<img src="/img/3-16/4.png"/>
+</div>
+
+定义工作流之后,可以在启动前设置通知策略,绑定告警组。
+
+<div align=center>
+
+<img src="/img/3-16/5.png"/>
+
+</div>
+
+在任务维度,可以配置超时告警,当任务超时可以触发报警。这里没有告警组配置,任务和工作流共用一个告警组,当任务超时,会推送到工作流设置的告警组。
+
+<div align=center>
+
+<img src="/img/3-16/6.png"/>
+
+</div>
+
+上图为系统告警配置的流程图。可以看到,一个工作流可以配置多个任务实例,任务可以配置超时触发告警,工作流成功或者失败可以触发告警。一个告警组可以绑定多个告警实例。这样的配置不太合理,我们希望告警实例也可以匹配工作流/任务实例的状态,也就是工作流成功和失败调用同一个告警组,但是触发不同的告警实例。这样使用起来更符合真实场景。
+
+<div align=center>
+
+<img src="/img/3-16/7.png"/>
+
+</div>
+
+创建告警组,一个告警组可以绑定多个告警实例。
+
+## 02 **大数据任务告警场景**
+
+<div align=center>
+
+<img src="/img/3-16/8.png"/>
+
+</div>
+
+以下是我们日常工作中的一些常见的大数据任务告警场景。
+
+对于定时任务,在开始执行前、任务上线、下线或修改参数,以及任务执行成功或失败时都发送通知。区别是,对于同一任务不同结果,我们希望触发不同的通知,比如成功发短信通知或者钉钉微信群通知即可,而任务失败了需要在第一时间通知对应的研发人员,以得到更快的响应,这时候钉钉微信群中@对应研发人员或者电话通知会更及时。目前,公司的任务调度平台是任务中调用API 进行通知,这种与代码强耦合的方式极其不方便,实际上可以抽象成一个更为通用的模块来实现。
+Apache DolphinScheduler 的架构虽然符合实际场景需求,但问题在于告警模块页面配置只能选择成功触发通知,或失败触发通知,绑定的是同一个告警组,即无论成功还是失败,告警的途径是相同的,这一点并不满足我们在实际生产环境中需要不同结果以不同方式通知的需求。因此,我们对 Alert 模块进行了一些改造。
+
+## 03 **Alert 模块的改造**
+
+<div align=center>
+
+<img src="/img/3-16/9.png"/>
+
+</div>
+
+改造的第一步是告警实例。此前,新增一个告警实例,触发告警就会触发该实例的 send 方法,我们希望在定义告警实例时可以绑定一个告警策略,有三个选项,成功发、失败发,以及成功和失败都发。
+
+
+在任务定义维度,有一个超时告警的功能,实际上对应失败的策略。
+
+<div align=center>
+
+<img src="/img/3-16/10.png"/>
+
+</div>
+
+上图为改造完成的配置页面,在创建告警实例页面,我们添加了一个告警类型字段,选择是在成功、失败,或者无论成功或失败时调用插件。
+
+<div align=center>
+
+<img src="/img/3-16/11.png"/>
+
+</div>
+
+上图为改造后Apache DolphinScheduler 告警模块的架构,我们对其中进行了两点改造。
+
+
+其一,在执行完工作流或任务时,如果触发告警,在写入DB时,会保存本次工作流或者任务的执行结果,具体是成功还是失败。
+
+第二,调用告警实例发送方法添加了一个逻辑判断,将告警实例与任务状态进行匹配,匹配则执行该告警实例发送逻辑,不匹配则过滤。
+
+
+改造后告警模块支持场景如下:
+
+<div align=center>
+<img src="/img/3-16/12.png"/>
+</div>
+
+详细设计请参考 issue:[https://github.com/apache/dolphinscheduler/issues/7992](https://github.com/apache/dolphinscheduler/issues/7992)
+
+代码详见:[https://github.com/apache/dolphinscheduler/pull/8636](https://github.com/apache/dolphinscheduler/pull/8636)
+
+此外,我们还针对 Apache DolphinScheduler 的告警模块向社区提出几点优化的建议,感兴趣的小伙伴可以跟进 issue,一起来做后续的工作:
+
+* 工作流启动或上下线或参数修改时,可以触发通知;
+* 告警场景针对 worker 的监控,如果 worker 挂掉或和 ZK 断开失去心跳,会认为 worker 宕机,会触发告警,但会默认匹配 ID 为 1 的告警组。这样的设置是在源码中写明的,但不看源码不知道其中的逻辑,不会专门设置ID为1的告警组,无法第一时间得到worker宕机的通知;
+* 告警模块目前支持飞书、钉钉、微信、邮件等多种插件,这些插件适用于国内用户,但国外用户可能使用不同的插件,如思科使用的 Webex Teams,国外常用告警插件 PagerDuty,我们也都进行开发并贡献给了社区。同时还有一些比较常用的比如Microsoft Teams等,感兴趣的小伙伴也可以提个PR,贡献到社区。
+最后一点,可能大数据领域的小伙伴对于前端不太熟悉,想要开发并贡献告警插件,但是想到需要开发前端就不想进行下去了。开发 Apache DolphinScheduler 告警插件是不需要写前端代码的,只需要在新建告警实例插件时,在 Java 代码中配置好页面中需要输入的参数或者需要选择的按钮(源码详见org.apache.dolphinscheduler.spi.params),系统会自动格式化成 JSON 格式,前端通过form-create 可以通过 JSON 自动渲染成页面。因此,完全不用担心写前端的问题。
diff --git a/img/3-16/1.png b/img/3-16/1.png
new file mode 100644
index 0000000..f194a61
Binary files /dev/null and b/img/3-16/1.png differ
diff --git a/img/3-16/10.png b/img/3-16/10.png
new file mode 100644
index 0000000..05dcba2
Binary files /dev/null and b/img/3-16/10.png differ
diff --git a/img/3-16/11.png b/img/3-16/11.png
new file mode 100644
index 0000000..80549c5
Binary files /dev/null and b/img/3-16/11.png differ
diff --git a/img/3-16/12.png b/img/3-16/12.png
new file mode 100644
index 0000000..bd2d623
Binary files /dev/null and b/img/3-16/12.png differ
diff --git a/img/3-16/2.png b/img/3-16/2.png
new file mode 100644
index 0000000..e675615
Binary files /dev/null and b/img/3-16/2.png differ
diff --git a/img/3-16/3.png b/img/3-16/3.png
new file mode 100644
index 0000000..9a39a55
Binary files /dev/null and b/img/3-16/3.png differ
diff --git a/img/3-16/4.png b/img/3-16/4.png
new file mode 100644
index 0000000..902b785
Binary files /dev/null and b/img/3-16/4.png differ
diff --git a/img/3-16/5.png b/img/3-16/5.png
new file mode 100644
index 0000000..47240cb
Binary files /dev/null and b/img/3-16/5.png differ
diff --git a/img/3-16/6.png b/img/3-16/6.png
new file mode 100644
index 0000000..9a034bc
Binary files /dev/null and b/img/3-16/6.png differ
diff --git a/img/3-16/7.png b/img/3-16/7.png
new file mode 100644
index 0000000..4c1a87d
Binary files /dev/null and b/img/3-16/7.png differ
diff --git a/img/3-16/8.png b/img/3-16/8.png
new file mode 100644
index 0000000..f0f163a
Binary files /dev/null and b/img/3-16/8.png differ
diff --git a/img/3-16/9.png b/img/3-16/9.png
new file mode 100644
index 0000000..94bc8e6
Binary files /dev/null and b/img/3-16/9.png differ
diff --git a/img/3-16/Eng/1.png b/img/3-16/Eng/1.png
new file mode 100644
index 0000000..f194a61
Binary files /dev/null and b/img/3-16/Eng/1.png differ
diff --git a/img/3-16/Eng/10.png b/img/3-16/Eng/10.png
new file mode 100644
index 0000000..3cf6015
Binary files /dev/null and b/img/3-16/Eng/10.png differ
diff --git a/img/3-16/Eng/11.png b/img/3-16/Eng/11.png
new file mode 100644
index 0000000..3bbbbde
Binary files /dev/null and b/img/3-16/Eng/11.png differ
diff --git a/img/3-16/Eng/12.png b/img/3-16/Eng/12.png
new file mode 100644
index 0000000..adc1b1a
Binary files /dev/null and b/img/3-16/Eng/12.png differ
diff --git a/img/3-16/Eng/2.png b/img/3-16/Eng/2.png
new file mode 100644
index 0000000..e675615
Binary files /dev/null and b/img/3-16/Eng/2.png differ
diff --git a/img/3-16/Eng/3.png b/img/3-16/Eng/3.png
new file mode 100644
index 0000000..9ed7b4b
Binary files /dev/null and b/img/3-16/Eng/3.png differ
diff --git a/img/3-16/Eng/4.png b/img/3-16/Eng/4.png
new file mode 100644
index 0000000..b9639c6
Binary files /dev/null and b/img/3-16/Eng/4.png differ
diff --git a/img/3-16/Eng/5.png b/img/3-16/Eng/5.png
new file mode 100644
index 0000000..7947b52
Binary files /dev/null and b/img/3-16/Eng/5.png differ
diff --git a/img/3-16/Eng/6.png b/img/3-16/Eng/6.png
new file mode 100644
index 0000000..306525f
Binary files /dev/null and b/img/3-16/Eng/6.png differ
diff --git a/img/3-16/Eng/7.png b/img/3-16/Eng/7.png
new file mode 100644
index 0000000..715e0cc
Binary files /dev/null and b/img/3-16/Eng/7.png differ
diff --git a/img/3-16/Eng/8.png b/img/3-16/Eng/8.png
new file mode 100644
index 0000000..805acde
Binary files /dev/null and b/img/3-16/Eng/8.png differ
diff --git a/img/3-16/Eng/9.png b/img/3-16/Eng/9.png
new file mode 100644
index 0000000..814083f
Binary files /dev/null and b/img/3-16/Eng/9.png differ
diff --git a/site_config/blog.js b/site_config/blog.js
index dbd5468..04a21fb 100644
--- a/site_config/blog.js
+++ b/site_config/blog.js
@@ -4,6 +4,14 @@ export default {
         postsTitle: 'All posts',
         list: [
             {
+
+                title: 'Cisco Hangzhou\'s Journey Through the Apache DolphinScheduler Alert Module Refactor',
+                author: 'Debra Chen',
+                dateStr: '2022-3-16',
+                desc: 'Cisco Hangzhou has introduced Apache DolphinScheduler.. ',
+                link: '/en-us/blog/Hangzhou_cisco.html',
+            },
+            {
                 title: 'How Does 360 DIGITECH process 10,000+ workflow instances per day by Apache DolphinScheduler',
                 author: 'Debra Chen',
                 dateStr: '2022-3-15',
@@ -160,6 +168,13 @@ export default {
         postsTitle: '所有文章',
         list: [
             {
+                title: '杭州思科对 Apache DolphinScheduler Alert 模块的改造',
+                author: 'Debra Chen',
+                dateStr: '2022-3-16',
+                desc: '杭州思科已经将 Apache DolphinScheduler 引入公司自建的大数据平台......',
+                link: '/zh-cn/blog/Hangzhou_cisco.md.html',
+            },
+            {
                 title: '日均处理 10000+ 工作流实例,Apache DolphinScheduler 在 360 数科的实践',
                 author: 'Debra Chen',
                 dateStr: '2022-3-15',
diff --git a/site_config/home.jsx b/site_config/home.jsx
index 165744b..eb6ea0b 100644
--- a/site_config/home.jsx
+++ b/site_config/home.jsx
@@ -55,6 +55,13 @@ export default {
       title: '事件 & 新闻',
       list: [
         {
+          img: '/img/3-16/1.png',
+          title: '杭州思科对 Apache DolphinScheduler Alert 模块的改造',
+          content: '杭州思科已经将 Apache DolphinScheduler 引入公司自建的大数据平台..',
+          dateStr: '2022-3-16',
+          link: '/zh-cn/blog/Hangzhou_cisco.md.html',
+        },
+        {
           img: '/img/2022-3-11/1.jpeg',
           title: '日均处理 10000+ 工作流实例,Apache DolphinScheduler 在 360 数科的实践',
           content: '从 2020 年起,360 数科全面将调度系统从 Azkaban 迁移到 Apache DolphinScheduler...',
@@ -68,13 +75,6 @@ export default {
           dateStr: '2022-3-10',
           link: '/zh-cn/blog/Exploration_and_practice_of_Tujia_Big_Data_Platform_Based.html',
         },
-        {
-          img: '/img/2022-3-7/1.png',
-          title: 'Apache DolphinScheduler 2.0.5 发布,Worker 容错流程优化',
-          content: '今天,Apache DolphinScheduler 宣布 2.0.5 版本正式发布。..',
-          dateStr: '2022-3-7',
-          link: '/zh-cn/blog/Apache_dolphinScheduler_2.0.5.html',
-        },
       ],
     },
     ourusers: {
@@ -547,6 +547,14 @@ export default {
       title: 'Events & News',
       list: [
         {
+
+          img: '/img/3-16/1.png',
+          title: 'Cisco Hangzhou\'s Journey Through the Apache DolphinScheduler Alert Module Refactor',
+          content: 'Cisco Hangzhou has introduced Apache DolphinScheduler....',
+          dateStr: '2022-3-16',
+          link: '/en-us/blog/Hangzhou_cisco.html',
+        },
+        {
           img: '/img/2022-3-11/1.jpeg',
           title: 'How Does 360 DIGITECH process 10,000+ workflow instances per day by Apache DolphinScheduler',
          content: 'Since 2020, 360 DIGITECH has fully migrated its scheduling system from Azkaban to Apache DolphinScheduler....',
@@ -560,13 +568,6 @@ export default {
           dateStr: '2022-3-10',
           link: '/en-us/blog/Exploration_and_practice_of_Tujia_Big_Data_Platform_Based.html',
         },
-        {
-          img: '/img/2022-3-7/1.png',
-          title: 'Release News! Apache DolphinScheduler 2_0_5 optimizes The Fault Tolerance Process of Worker',
-          content: 'Today, Apache DolphinScheduler announced the official release of version 2.0.5....',
-          dateStr: '2022-3-7',
-          link: '/en-us/blog/Apache_dolphinScheduler_2.0.5.html',
-        },
       ],
     },
     userreview: {