Posted to commits@eventmesh.apache.org by ch...@apache.org on 2023/01/12 12:40:36 UTC

[incubator-eventmesh-site] 01/02: update docs

This is an automated email from the ASF dual-hosted git repository.

chenguangsheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-eventmesh-site.git

commit ff3e0d7366d5e528c20072f86d64c8044f874994
Author: qqeasonchen <qq...@gmail.com>
AuthorDate: Thu Jan 12 20:38:10 2023 +0800

    update docs
---
 docs/design-document/01-workflow.md         |   8 +--
 docs/design-document/02-runtime-protocol.md |  10 +--
 docs/design-document/03-stream.md           |  12 ++--
 docs/design-document/04-schema-registry.md  |   8 +--
 docs/design-document/05-metrics-export.md   |   2 +-
 docs/design-document/06-tracing.md          |  87 -----------------------
 docs/design-document/07-cloudevents.md      | 106 ----------------------------
 docs/design-document/08-spi.md              |   2 +-
 docs/introduction.md                        |  56 ---------------
 docs/metrics-tracing/01-prometheus.md       |  24 -------
 docs/metrics-tracing/02-zipkin.md           |  38 ----------
 docs/metrics-tracing/_category_.json        |   5 --
 docs/roadmap.md                             |  35 ++++-----
 docs/sdk-java/02-http.md                    |  21 ++++++
 14 files changed, 61 insertions(+), 353 deletions(-)

diff --git a/docs/design-document/01-workflow.md b/docs/design-document/01-workflow.md
index 236e629..fecae1e 100644
--- a/docs/design-document/01-workflow.md
+++ b/docs/design-document/01-workflow.md
@@ -1,4 +1,4 @@
-# Workflow
+# EventMesh Workflow
 
 ## Business Problem
 
@@ -6,7 +6,7 @@ Imaging you are building a simple Order Management System for an E-Commerce Stor
 
 For high availability and high performance, you architect the system using event-driven architecture (EDA), and build microservice apps to handle store frontend, order management, payment processing, and shipment management. You deploy the whole system in a cloud environment. To handle high workloads, you leverage a messaging system to buffer the loads, and scale up multiple instances of microservices. The architecture could look similar to:
 
-![Workflow Use Case](/images/design-document/workflow-use-case.jpg)
+![Workflow Use Case](../../images/design-document/workflow-use-case.jpg)
 
 While each microservice acts on its own event channels, EventMesh plays a crucial role in event orchestration.
 
@@ -181,13 +181,13 @@ events:
 
 The corresponding workflow diagram is the following:
 
-![Workflow Diagram](/images/design-document/workflow-diagram.png)
+![Workflow Diagram](../../images/design-document/workflow-diagram.png)
 
 ## EventMesh Workflow Engine
 
 In the following architecture diagram, the EventMesh Catalog, the EventMesh Workflow Engine, and the EventMesh Runtime run in three separate processes.
 
-![Workflow Architecture](/images/design-document/workflow-architecture.jpg)
+![Workflow Architecture](../../images/design-document/workflow-architecture.jpg)
 
 The steps to run the workflow are as follows:
 
diff --git a/docs/design-document/02-runtime-protocol.md b/docs/design-document/02-runtime-protocol.md
index f24ef57..b992bf8 100644
--- a/docs/design-document/02-runtime-protocol.md
+++ b/docs/design-document/02-runtime-protocol.md
@@ -1,4 +1,4 @@
-# Runtime Protocol
+# EventMesh Runtime Protocol
 
 ## TCP Protocol
 
@@ -128,15 +128,15 @@ public enum Command {
 
 #### Sync Message
 
-![Sync Message](/images/design-document/sync-message.png)
+![Sync Message](../../images/design-document/sync-message.png)
 
 #### Async Message
 
-![Async Message](/images/design-document/async-message.png)
+![Async Message](../../images/design-document/async-message.png)
 
 #### Broadcast Message
 
-![Broadcast Message](/images/design-document/broadcast-message.png)
+![Broadcast Message](../../images/design-document/broadcast-message.png)
 
 ## HTTP Protocol
 
@@ -246,7 +246,7 @@ The request header of the Send Async message is identical to the request header
 
 ### Protobuf
 
-The `eventmesh-protocol-gprc` module contains the [protobuf definition file](https://github.com/apache/incubator-eventmesh/blob/master/eventmesh-protocol-plugin/eventmesh-protocol-grpc/src/main/proto/eventmesh-client.proto) of the Evenmesh client. The `gradle build` command generates the gRPC codes, which are located in `/build/generated/source/proto/main`. The generated gRPC codes are used in `eventmesh-sdk-java` module.
+The `eventmesh-protocol-grpc` module contains the [protobuf definition file](https://github.com/apache/incubator-eventmesh/blob/master/eventmesh-protocol-plugin/eventmesh-protocol-grpc/src/main/proto/eventmesh-client.proto) of the EventMesh client. The `gradle build` command generates the gRPC code under `/build/generated/source/proto/main`, which is used by the `eventmesh-sdk-java` module.
 
 ### Data Model
 
diff --git a/docs/design-document/03-stream.md b/docs/design-document/03-stream.md
index fe6ba68..d379ccd 100644
--- a/docs/design-document/03-stream.md
+++ b/docs/design-document/03-stream.md
@@ -1,4 +1,4 @@
-# Stream
+# EventMesh Stream
 
 ## Overview of Event Streaming
 
@@ -36,7 +36,7 @@ and easily integrate various systems consuming or producing data.
 
 ## Architecture
 
-![Stream Architecture](/images/design-document/stream-architecture.png)
+![Stream Architecture](../../images/design-document/stream-architecture.png)
 
 ## Design
 
@@ -83,7 +83,7 @@ The main advantage of the pipeline is that you can create complex event processi
 
 Component interface is the primary entry point, you can use Component object as a factory to create EndPoint objects.
 
-![Stream Component Interface](/images/design-document/stream-component-interface.png)
+![Stream Component Interface](../../images/design-document/stream-component-interface.png)
 
 ### EndPoint
 
@@ -92,14 +92,14 @@ EndPoint which is act as factories for creating Consumer, Producer and Event obj
 - `createConsumer()` — Creates a consumer endpoint, which represents the source endpoint at the beginning of a route.
 - `createProducer()` — Creates a producer endpoint, which represents the target endpoint at the end of a route.
 
-![Stream Component Routes](/images/design-document/stream-component-routes.png)
+![Stream Component Routes](../../images/design-document/stream-component-routes.png)
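The Component → Endpoint → Producer/Consumer factory chain described above can be sketched as follows. This is a minimal, self-contained illustration; the interfaces below are simplified stand-ins, not the actual EventMesh Stream API.

```java
// Illustrative sketch of the factory chain: a Component creates Endpoints,
// and an Endpoint creates the Producer/Consumer ends of a route.
// All names here are simplified stand-ins for the real EventMesh Stream interfaces.
interface Producer { void send(String event); }
interface Consumer { String receive(); }

interface Endpoint {
    Producer createProducer(); // target endpoint at the end of a route
    Consumer createConsumer(); // source endpoint at the beginning of a route
}

interface Component {
    Endpoint createEndpoint(String uri); // Component acts as a factory for Endpoints
}

public class Main implements Component {
    @Override
    public Endpoint createEndpoint(String uri) {
        return new Endpoint() {
            @Override
            public Producer createProducer() {
                return event -> System.out.println("send to " + uri + ": " + event);
            }
            @Override
            public Consumer createConsumer() {
                return () -> "event from " + uri;
            }
        };
    }

    public static void main(String[] args) {
        Endpoint endpoint = new Main().createEndpoint("demo://orders");
        endpoint.createProducer().send("order-created");
        System.out.println(endpoint.createConsumer().receive());
    }
}
```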
 
 #### Producer
 
 Users can create the following types of producer:
 > Synchronous producer: the processing thread blocks until the producer has finished processing the event.
 
-![Stream Sync Producer](/images/design-document/stream-sync-producer.png)
+![Stream Sync Producer](../../images/design-document/stream-sync-producer.png)
 
 Producer types planned for the future:
 
@@ -110,7 +110,7 @@ In future Producer Types:
 Users can create the following types of consumer:
 > Event-driven consumer: the processing of an incoming event is initiated when the message binder calls a method on the consumer.
 
-![Stream Event-Driven Consumer](/images/design-document/stream-event-driven-consumer.png)
+![Stream Event-Driven Consumer](../../images/design-document/stream-event-driven-consumer.png)
 
 In the future:
 
diff --git a/docs/design-document/04-schema-registry.md b/docs/design-document/04-schema-registry.md
index 65c0606..5c0b65f 100644
--- a/docs/design-document/04-schema-registry.md
+++ b/docs/design-document/04-schema-registry.md
@@ -1,4 +1,4 @@
-# Schema Registry (OpenSchema)
+# EventMesh Schema Registry (OpenSchema)
 
 ## Overview of Schema and Schema Registry
 
@@ -39,11 +39,11 @@ OpenSchema[[5]](#References) proposes a specification for data schema when excha
 
 ### Architecture
 
-![OpenSchema](/images/design-document/schema-registry-architecture.png)
+![OpenSchema](../../images/design-document/schema-registry-architecture.png)
 
 ### Process of Transferring Messages under Schema Registry
 
-![Process](/images/design-document/schema-registry-process.jpg)
+![Process](../../images/design-document/schema-registry-process.jpg)
 
 The high-level process of message transmission consists of the following 10 steps:
 
@@ -119,7 +119,7 @@ No. | Type | URL | response | exception | code | test
 
 ```CompatibilityController.java```+```CompatibilityService.java``` : ```OpenSchema 7.3.1~7.3.3 (API 11~13)``` + ```Check for Compatibility```
 
-![Project Structure](/images/design-document/schema-registry-project-structure.png)
+![Project Structure](../../images/design-document/schema-registry-project-structure.png)
 
 ## References
 
diff --git a/docs/design-document/05-metrics-export.md b/docs/design-document/05-metrics-export.md
index d286584..bab1876 100644
--- a/docs/design-document/05-metrics-export.md
+++ b/docs/design-document/05-metrics-export.md
@@ -1,4 +1,4 @@
-# Metrics (OpenTelemetry and Prometheus)
+# EventMesh Metrics (OpenTelemetry and Prometheus)
 
 ## Introduction
 
diff --git a/docs/design-document/06-tracing.md b/docs/design-document/06-tracing.md
deleted file mode 100644
index 1b19196..0000000
--- a/docs/design-document/06-tracing.md
+++ /dev/null
@@ -1,87 +0,0 @@
-# Distributed Tracing
-
-## Overview of OpenTelemetry
-
-OpenTelemetry is a collection of tools, APIs, and SDKs. You can use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for analysis in order to understand your software's performance and behavior.
-
-## Requirements
-
-- set tracer
-- different exporter
-- start and end span in server
-
-## Design Details
-
-- SpanProcessor: BatchSpanProcessor
-
-- Exporter: log (default); can be changed via properties
-
-```java
-// Configure the batch spans processor. This span processor exports span in batches.
-BatchSpanProcessor batchSpansProcessor =
-    BatchSpanProcessor.builder(exporter)
-        .setMaxExportBatchSize(512) // set the maximum batch size to use
-        .setMaxQueueSize(2048) // set the queue size. This must be >= the export batch size
-        .setExporterTimeout(
-            30, TimeUnit.SECONDS) // set the max amount of time an export can run before getting
-        // interrupted
-        .setScheduleDelay(5, TimeUnit.SECONDS) // set time between two different exports
-        .build();
-OpenTelemetrySdk.builder()
-    .setTracerProvider(
-        SdkTracerProvider.builder().addSpanProcessor(batchSpansProcessor).build())
-    .build();
-```
-
-1. When the `init()` method of the class `EventMeshHTTPServer` is called, the class `AbstractHTTPServer` will get the tracer:
-
-```java
-super.openTelemetryTraceFactory = new OpenTelemetryTraceFactory(eventMeshHttpConfiguration);
-super.tracer = openTelemetryTraceFactory.getTracer(this.getClass().toString());
-super.textMapPropagator = openTelemetryTraceFactory.getTextMapPropagator();
-```
-
-2. Then the tracing in the class `AbstractHTTPServer` will work.
-
-## Problems
-
-### How to set different exporter in class 'OpenTelemetryTraceFactory'? (Solved)
-
-After obtaining the exporter type from the properties, how should it be handled?
-
-The 'logExporter' only needs to be instantiated.
-
-But the 'zipkinExporter' needs to be instantiated and then obtained through the "getZipkinExporter()" method.
-
-## Solutions
-
-### Solution of different exporter
-
-Use reflection to get an exporter.
-
-First of all, every exporter must implement the interface 'EventMeshExporter'.
-
-Then the exporter name is read from the configuration and the corresponding class is loaded via reflection.
-
-```java
-//different spanExporter
-String exporterName = configuration.eventMeshTraceExporterType;
-//use reflection to get spanExporter
-String className = String.format("org.apache.eventmesh.runtime.exporter.%sExporter",exporterName);
-EventMeshExporter eventMeshExporter = (EventMeshExporter) Class.forName(className).newInstance();
-spanExporter = eventMeshExporter.getSpanExporter(configuration);
-```
-
-Additionally, this code is wrapped in a try-catch block. If the specified exporter cannot be obtained, the default log exporter is used instead.
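The reflection-plus-fallback approach described above can be sketched as follows. This is a self-contained illustration: the `EventMeshExporter` interface and the exporter classes here are simplified stand-ins for EventMesh's real ones, and the package prefix is omitted.

```java
// Illustrative sketch of reflection-based exporter loading with a log fallback.
// The interface and exporter classes are stand-ins, not EventMesh's real types.
interface EventMeshExporter { String getSpanExporter(); }

class LogExporter implements EventMeshExporter {
    public String getSpanExporter() { return "log"; }
}

class ZipkinExporter implements EventMeshExporter {
    public String getSpanExporter() { return "zipkin"; }
}

public class Main {
    // Resolve the exporter named in the configuration; fall back to the log exporter
    // when reflection fails (e.g. the class does not exist).
    static EventMeshExporter resolve(String exporterName) {
        try {
            String className = String.format("%sExporter", exporterName);
            return (EventMeshExporter) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            return new LogExporter(); // default exporter
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("Zipkin").getSpanExporter());  // zipkin
        System.out.println(resolve("Unknown").getSpanExporter()); // log (fallback)
    }
}
```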
-
-#### Improvement of different exporter
-
-SPI (To be completed)
-
-## Appendix
-
-### References
-
-<https://github.com/open-telemetry/docs-cn/blob/main/QUICKSTART.md>
-
-<https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/netty>
diff --git a/docs/design-document/07-cloudevents.md b/docs/design-document/07-cloudevents.md
deleted file mode 100644
index bff91b4..0000000
--- a/docs/design-document/07-cloudevents.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# CloudEvents Integration
-
-## Introduction
-
-[CloudEvents](https://github.com/cloudevents/spec) is a specification for describing event data in common formats to provide interoperability across services, platforms and systems.
-
-As of May 2021, EventMesh contains the following major components: `eventmesh-runtime`, `eventmesh-sdk-java` and `eventmesh-connector-rocketmq`.
-For a customer to use EventMesh, `eventmesh-runtime` can be deployed as microservices to transmit
-customer's events between event producers and consumers. Customer's applications can then interact
-with `eventmesh-runtime` using `eventmesh-sdk-java` to publish/subscribe for events on given topics.
-
-CloudEvents support has been a highly desired feature by EventMesh users. There are many reasons
-for users to prefer using a SDK with CloudEvents support:
-
-- CloudEvents is a more widely accepted and supported way to describe events. `eventmesh-sdk-java`
-  currently uses the `LiteMessage` structure to describe events, which is less standardized.
-- CloudEvents's Java SDK has a wider range of distribution methods. For example, EventMesh users
-  currently need to use the SDK tarball or build from source for every EventMesh release. With
-  CloudEvents support, it's easier for users to take a dependency on EventMesh's SDK using CloudEvents's public distributions (e.g. through a Maven configuration).
-- CloudEvents's SDK supports multiple languages. Although EventMesh currently only supports a Java SDK, in future if more languages need to be supported, the extensions can be easier with experience on binding Java SDK with CloudEvents.
-
-## Requirements
-
-### Functional Requirements
-
-| Requirement ID | Requirement Description | Comments |
-| -------------- | ----------------------- | -------- |
-| F-1            | EventMesh users should be able to depend on a public SDK to publish/subscribe events in CloudEvents format | Functionality |
-| F-2            | EventMesh users should continue to have access to existing EventMesh client features (e.g. load balancing) with an SDK that supports CloudEvent | Feature Parity |
-| F-3            | EventMesh developers should be able to sync `eventmesh-sdk-java` and an SDK with CloudEvents support without much effort/pain | Maintainability |
-| F-4 | EventMesh supports pluggable protocols so that developers can integrate other protocols (e.g. CloudEvents\EventMesh Message\OpenMessage\MQTT ...) | Functionality |
-| F-5 | EventMesh provides a unified API for publishing/subscribing events to/from the event store | Functionality |
-
-### Performance Requirements
-
-| Requirement ID | Requirement Description | Comments |
-| -------------- | ----------------------- | -------- |
-| P-1            | Client side latency for SDK with CloudEvents support should be similar to current SDK | |
-
-## Design Details
-
-Binding with the CloudEvents Java SDK (similar to what Kafka already did, see Reference for more details)
-should be an easy way to achieve the requirements.
-
-### Pluggable Protocols
-
-![Pluggable Protocols](/images/design-document/cloudevents-pluggable-protocols.png)
-
-### Process of CloudEvents under EventMesh
-
-#### For TCP
-
-##### SDK side for publish
-
-- add the CloudEvents identifier in the `package` header
-- use `CloudEventBuilder` to build the CloudEvent and put it into the `package` body
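For illustration, the envelope such a CloudEvent carries can be sketched with plain JDK types. This is not the CloudEvents Java SDK's `CloudEventBuilder`; it is a hand-rolled map holding the required CloudEvents v1.0 context attributes.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

public class Main {
    // Build the minimal required CloudEvents v1.0 attribute set plus a data payload.
    // This is an illustrative stand-in for io.cloudevents' CloudEventBuilder.
    static Map<String, String> buildCloudEvent(String source, String type, String data) {
        Map<String, String> event = new LinkedHashMap<>();
        event.put("specversion", "1.0");               // required by the CloudEvents spec
        event.put("id", UUID.randomUUID().toString()); // unique per event
        event.put("source", source);                   // event producer, e.g. a URI
        event.put("type", type);                       // event kind, e.g. "order.created"
        event.put("data", data);                       // payload placed in the package body
        return event;
    }

    public static void main(String[] args) {
        Map<String, String> event =
                buildCloudEvent("/order-service", "order.created", "{\"orderId\":1}");
        System.out.println(event);
    }
}
```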
-
-##### SDK side for subscribe
-
-- add a `convert` function to the `ReceiveMsgHook` interface for converting the `package` body to the specific protocol identified in the `package` header
-- different protocols should implement the `ReceiveMsgHook` interface
-
-##### Server side for publish
-
-- design the protocol conversion API with a `decodeMessage` interface that converts the package body to a CloudEvent
-- update `Session.upstreamMsg()` in `MessageTransferTask`, changing the input parameter from `Message` to `CloudEvent` (the CloudEvent converted by the `decodeMessage` API from the previous step)
-- update `SessionSender.send()`, changing the input parameter from `Message` to `CloudEvent`
-- update the `MeshMQProducer` API to support sending `CloudEvents` in the runtime
-- support the implementation in the `connector-plugin` for sending `CloudEvents` to the EventStore
-
-##### Server side for subscribe
-
-- support changing `RocketMessage` to `CloudEvent` in the connector-plugin
-
-- overwrite the `AsyncMessageListener.consume()` function, changing the input parameter from `Message` to `CloudEvent`
-
-- update the `MeshMQPushConsumer.updateOffset()` implementation, changing the input parameter from `Message` to `CloudEvent`
-
-- update `DownStreamMsgContext`, changing the input parameter from `Message` to `CloudEvent`, and update `DownStreamMsgContext.ackMsg`
-
-#### For HTTP
-
-##### SDK side for publish
-
-- support `LiteProducer.publish(cloudEvent)`
-- add the CloudEvents identifier in http request header
-
-##### SDK side for subscribe
-
-##### Server side for publish
-
-- support building the `HttpCommand.body` with pluggable protocol plugins according to the protocol type in the `HttpCommand` header
-- support publishing the CloudEvent in the message processors
-
-##### Server side for subscribe
-
-- update the `EventMeshConsumer.subscribe()`
-
-- update `HandleMsgContext`, changing the input parameter from `Message` to `CloudEvent`
-- update `AsyncHttpPushRequest.tryHTTPRequest()`
-
-## Appendix
-
-### References
-
-- <https://cloudevents.github.io/sdk-java/kafka>
diff --git a/docs/design-document/08-spi.md b/docs/design-document/08-spi.md
index c8b333d..94da666 100644
--- a/docs/design-document/08-spi.md
+++ b/docs/design-document/08-spi.md
@@ -1,4 +1,4 @@
-# SPI
+# Service Provider Interface
 
 ## Introduction
 
diff --git a/docs/introduction.md b/docs/introduction.md
deleted file mode 100644
index df6eb30..0000000
--- a/docs/introduction.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-sidebar_position: 0
----
-
-# Introduction to EventMesh
-
-[![CI status](https://img.shields.io/github/workflow/status/apache/incubator-eventmesh/Continuous%20Integration?logo=github&style=for-the-badge)](https://github.com/apache/incubator-eventmesh/actions/workflows/ci.yml)
-[![CodeCov](https://img.shields.io/codecov/c/gh/apache/incubator-eventmesh/master?logo=codecov&style=for-the-badge)](https://codecov.io/gh/apache/incubator-eventmesh)
-[![Code Quality: Java](https://img.shields.io/lgtm/grade/java/g/apache/incubator-eventmesh.svg?logo=lgtm&logoWidth=18&style=for-the-badge)](https://lgtm.com/projects/g/apache/incubator-eventmesh/context:java)
-[![Total Alerts](https://img.shields.io/lgtm/alerts/g/apache/incubator-eventmesh.svg?logo=lgtm&logoWidth=18&style=for-the-badge)](https://lgtm.com/projects/g/apache/incubator-eventmesh/alerts/)
-[![License](https://img.shields.io/github/license/apache/incubator-eventmesh?style=for-the-badge)](https://www.apache.org/licenses/LICENSE-2.0.html)
-[![GitHub Release](https://img.shields.io/github/v/release/apache/eventmesh?style=for-the-badge)](https://github.com/apache/incubator-eventmesh/releases)
-[![Slack Status](https://img.shields.io/badge/slack-join_chat-blue.svg?logo=slack&style=for-the-badge)](https://join.slack.com/t/apacheeventmesh/shared_invite/zt-16y1n77va-q~JepYy3RqpkygDYmQaQbw)
-
-**Apache EventMesh (Incubating)** is a dynamic event-driven application runtime used to decouple the application and backend middleware layer, which supports a wide range of use cases that encompass complex multi-cloud, widely distributed topologies using diverse technology stacks.
-
-## Features
-
-[//]: # ()
-[//]: # ()
-[//]: # (### Multi-Runtime Architecture)
-
-[//]: # ()
-[//]: # (![EventMesh Architecture]&#40;docs/images/eventmesh-architecture.png&#41;)
-
-[//]: # ()
-[//]: # (### Orchestration)
-
-[//]: # ()
-[//]: # (![EventMesh Orchestration]&#40;docs/images/eventmesh-orchestration.png&#41;)
-
-[//]: # ()
-[//]: # (### Data Mesh)
-
-[//]: # ()
-[//]: # (![EventMesh Data Mesh]&#40;docs/images/eventmesh-bridge.png&#41;)
-
-- **Communication Protocol**: EventMesh could communicate with clients with TCP, HTTP, or gRPC.
-- **CloudEvents**: EventMesh supports the [CloudEvents](https://cloudevents.io) specification as the format of the events. CloudEvents is a specification for describing event data in common formats to provide interoperability across services, platforms, and systems.
-- **Schema Registry**: EventMesh implements a schema registry that receives and stores schemas from clients and provides an interface for other clients to retrieve schemas.
-- **Observability**: EventMesh exposes a range of metrics, such as the average latency of the HTTP protocol and the number of delivered messages. The metrics can be collected and analyzed with Prometheus or OpenTelemetry.
-- **Event Workflow Orchestration**: EventMesh Workflow could receive an event and decide which command to trigger next based on the workflow definitions and the current workflow state. The workflow definition could be written with the [Serverless Workflow](https://serverlessworkflow.io) DSL.
-
-## Components
-
-Apache EventMesh (Incubating) consists of multiple components that integrate different middlewares and messaging protocols to enhance the functionalities of the application runtime.
-
-- **eventmesh-runtime**: The middleware that transmits events between producers and consumers, which supports cloud-native apps and microservices.
-- **eventmesh-sdk-java**: The Java SDK that supports HTTP, TCP, and [gRPC](https://grpc.io) protocols.
-- **eventmesh-sdk-go**: The Golang SDK that supports HTTP, TCP, and [gRPC](https://grpc.io) protocols.
-- **eventmesh-connector-plugin**: The collection of plugins that connect middlewares such as [Apache RocketMQ](https://rocketmq.apache.org) (implemented), [Apache Kafka](https://kafka.apache.org) (in progress), [Apache Pulsar](https://pulsar.apache.org/) (in progress), and [Redis](https://redis.io) (in progress).
-- **eventmesh-registry-plugin**: The collection of plugins that integrate service registries such as [Nacos](https://nacos.io) and [etcd](https://etcd.io).
-- **eventmesh-security-plugin**: The collection of plugins that implement security mechanisms, such as ACL (access control list), authentication, and authorization.
-- **eventmesh-protocol-plugin**: The collection of plugins that implement messaging protocols, such as [CloudEvents](https://cloudevents.io) and [MQTT](https://mqtt.org).
-- **eventmesh-admin**: The control plane that manages clients, topics, and subscriptions.
-
diff --git a/docs/metrics-tracing/01-prometheus.md b/docs/metrics-tracing/01-prometheus.md
deleted file mode 100644
index 1bc6e0e..0000000
--- a/docs/metrics-tracing/01-prometheus.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Observe Metrics with Prometheus
-
-## Prometheus
-
-[Prometheus](https://prometheus.io/docs/introduction/overview/) is an open-source system monitoring and alerting toolkit that collects and stores the metrics as time-series data. EventMesh exposes a collection of metrics data that could be scraped and analyzed by Prometheus. Please follow [the "First steps with Prometheus" tutorial](https://prometheus.io/docs/introduction/first_steps/) to download and install the latest release of Prometheus.
-
-## Edit Prometheus Configuration
-
-The `eventmesh-runtime/conf/prometheus.yml` configuration file specifies the port of the metrics HTTP endpoint. The default metrics port is `19090`.
-
-```properties
-eventMesh.metrics.prometheus.port=19090
-```
-
-Please refer to [the Prometheus configuration guide](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) to add the EventMesh metrics as a scrape target in the configuration file. Here's the minimum configuration that creates a job with the name `eventmesh` and the endpoint `http://localhost:19090`:
-
-```yaml
-scrape_configs:
-  - job_name: "eventmesh"
-    static_configs:
-      - targets: ["localhost:19090"]
-```
-
-Please navigate to the Prometheus dashboard (e.g. `http://localhost:9090`) to view the list of metrics exported by EventMesh, which are prefixed with `eventmesh_`.
diff --git a/docs/metrics-tracing/02-zipkin.md b/docs/metrics-tracing/02-zipkin.md
deleted file mode 100644
index f4d0725..0000000
--- a/docs/metrics-tracing/02-zipkin.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Collect Trace with Zipkin
-
-## Zipkin
-
-Distributed tracing is a method used to profile and monitor applications built with microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance.
-
-[Zipkin](https://zipkin.io) is a distributed tracing system that helps collect timing data needed to troubleshoot latency problems in service architectures. EventMesh exposes a collection of trace data that could be collected and analyzed by Zipkin. Please follow [the "Zipkin Quickstart" tutorial](https://zipkin.io/pages/quickstart.html) to download and install the latest release of Zipkin.
-
-## Configuration
-
-To enable the trace exporter of EventMesh Runtime, set the `eventMesh.server.trace.enabled` field in the `conf/eventmesh.properties` file to `true`.
-
-```conf
-# Trace plugin
-eventMesh.server.trace.enabled=true
-eventMesh.trace.plugin=zipkin
-```
-
-To customize the behavior of the trace exporter such as timeout or export interval, edit the `exporter.properties` file.
-
-```conf
-# Set the maximum batch size to use
-eventmesh.trace.max.export.size=512
-# Set the queue size. This must be >= the export batch size
-eventmesh.trace.max.queue.size=2048
-# Set the max amount of time an export can run before getting interrupted (TimeUnit=SECONDS)
-eventmesh.trace.export.timeout=30
-# Set time between two different exports (TimeUnit=SECONDS)
-eventmesh.trace.export.interval=5
-```
-
-To send the exported trace data to Zipkin, edit the `eventmesh.trace.zipkin.ip` and `eventmesh.trace.zipkin.port` fields in the `conf/zipkin.properties` file to match the configuration of the Zipkin server.
-
-```conf
-# Zipkin's IP and Port
-eventmesh.trace.zipkin.ip=localhost
-eventmesh.trace.zipkin.port=9411
-```
diff --git a/docs/metrics-tracing/_category_.json b/docs/metrics-tracing/_category_.json
deleted file mode 100644
index 72c4414..0000000
--- a/docs/metrics-tracing/_category_.json
+++ /dev/null
@@ -1,5 +0,0 @@
-{
-  "position": 4,
-  "label": "Metrics and Tracing",
-  "collapsed": false
-}
diff --git a/docs/roadmap.md b/docs/roadmap.md
index c0dea20..e07a3a6 100644
--- a/docs/roadmap.md
+++ b/docs/roadmap.md
@@ -4,7 +4,7 @@ sidebar_position: 1
 
 # Development Roadmap
 
-The development roadmap of Apache EventMesh (Incubating) is an overview of the planned features and milestones involved in the next several releases. The recent features and bug fixes are documented in the [release notes](https://eventmesh.apache.org/events/release-notes/v1.5.0). The order of the features listed below doesn't correspond to their priorities.
+The development roadmap of Apache EventMesh (Incubating) is an overview of the planned features and milestones involved in the next several releases. The recent features and bug fixes are documented in the [release notes](https://eventmesh.apache.org/events/release-notes/v1.4.0). The order of the features listed below doesn't correspond to their priorities.
 
 ## List of Features and Milestones
 
@@ -25,23 +25,26 @@ The development roadmap of Apache EventMesh (Incubating) is an overview of the p
 | **Implemented in 1.5.0**                  | Support Nacos Registry          | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
 | **Implemented in 1.5.0**                  | Support Mesh Bridge             | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
 | **Implemented in 1.5.0**                  | Support  Federal Government     | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **Implemented in 1.6.0 (to be released)** | Integrate with Consul           | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **Implemented in 1.6.0 (to be released)** | Support Webhook                 | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **Implemented in 1.6.0 (to be released)** | Support ETCD                    | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **In Progress**                           | Knative Eventing Infrastructure | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/790), [GSoC '22](https://issues.apache.org/jira/browse/COMDEV-463) |
-| **In Progress**                           | Dashboard                       | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/700), [GSoC '22](https://issues.apache.org/jira/browse/COMDEV-465) |
-| **In Progress**                           | Support Kafka as EventStore     | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/676) |
-| **In Progress**                           | Support Pulsar as EventStore    | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/676) |
-| **In Progress**                           | Support Dledger                 | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **In Progress**                           | Workflow                        | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **In Progress**                           | Support Redis                   | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **In Progress**                           | Support Mesh Bridge             | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
-| **In Progress**                           | Support Zookeeper               | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.6.0**                  | Integrate with Consul           | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.6.0**                  | Support Webhook                 | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.6.0**                  | Support etcd                    | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.7.0**                           | Support Knative Eventing Infrastructure | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/790), [GSoC '22](https://issues.apache.org/jira/browse/COMDEV-463) |
+| **Implemented in 1.7.0**                           | Support Pravega as EventStore   | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/270)  |
+| **Implemented in 1.7.0**                           | Support Kafka as EventStore     | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/676) |
+| **Implemented in 1.7.0**                           | Support Pulsar as EventStore    | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/676) |
+| **Implemented in 1.7.0**                           | Support CNCF Serverless Workflow | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.7.0**                           | Support Redis                   | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.7.0**                           | Provide Rust SDK                        | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/815) |
+| **Implemented in 1.7.0**                           | Support Zookeeper               | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| **Implemented in 1.7.0**                           | Support RabbitMQ as EventStore               | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/1553) |
+| **In Progress**                           | Provide Dashboard                       | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/700), [GSoC '22](https://issues.apache.org/jira/browse/COMDEV-465)
+| **In Progress**                           | Support Filter Chain                    | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/664) |
+| **In Progress**                           | Metadata consistency persistent | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/817)  |
+| Planned                                   | Support Dledger                 | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
 | Planned                                   | Provide NodeJS SDK              | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/417) |
+| Planned                                   | Provide PHP SDK                 | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/1193) |
 | Planned                                   | Transaction Event               | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/697) |
 | Planned                                   | Event Query Language (EQL)      | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/778) |
-| Planned                                   | Metadata consistency persistent | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/817)  |
-| Planned                                   | Rust SDK                        | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/815) |
 | Planned                                   | WebAssembly Runtime             | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/576) |
-| Planned                                   | Filter Chain                    | [GitHub Issue](https://github.com/apache/incubator-eventmesh/issues/664) |
 
diff --git a/docs/sdk-java/02-http.md b/docs/sdk-java/02-http.md
index eb71cb5..bf2e5c6 100644
--- a/docs/sdk-java/02-http.md
+++ b/docs/sdk-java/02-http.md
@@ -92,3 +92,24 @@ public class HTTP {
   }
 }
 ```
+
+## Using Curl Command
+
+You can also publish and subscribe to events without the EventMesh SDK, using only `curl`.
+
+### Publish
+
+```shell
+curl -H "Content-Type:application/json" -X POST -d '{"name": "admin", "pass":"12345678"}' http://127.0.0.1:10105/eventmesh/publish/TEST-TOPIC-HTTP-ASYNC
+```
+
+After you start the EventMesh runtime server, you can use the `curl` command to publish an event to a specific topic with the HTTP POST method. The request body must be in JSON format, and the publish URL looks like `http://127.0.0.1:10105/eventmesh/publish/TEST-TOPIC-HTTP-ASYNC`. On success, the response indicates that the event was published.
+
+### Subscribe
+
+```shell
+curl -H "Content-Type:application/json" -X POST -d '{"url": "http://127.0.0.1:8088/sub/test", "consumerGroup":"TEST-GROUP", "topic":[{"mode":"CLUSTERING","topic":"TEST-TOPIC-HTTP-ASYNC","type":"ASYNC"}]}' http://127.0.0.1:10105/eventmesh/subscribe/local
+```
+
+After you start the EventMesh runtime server, you can use the `curl` command to subscribe to a list of topics with the HTTP POST method. The request body must be in JSON format, and the subscribe URL looks like `http://127.0.0.1:10105/eventmesh/subscribe/local`. On success, the response indicates that the subscription was created. Pay attention to the `url` field in the request body: you need to set up an HTTP service at that URL to receive the delivered events; you can see the example in the `eventmesh-example [...]
+
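The `url` callback in the subscribe request must point at a running HTTP service. As a minimal sketch of such a subscriber endpoint (the port, path, and the `{"retCode":0}` acknowledgment body here are illustrative assumptions, not the documented EventMesh delivery contract), using only the JDK's built-in HTTP server:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the HTTP endpoint that EventMesh delivers subscribed
// events to. The address must match the "url" field sent in the subscribe
// request (http://127.0.0.1:8088/sub/test in the example above).
public class SubEndpoint {
    public static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8088), 0);
        server.createContext("/sub/test", exchange -> {
            // Read the delivered event payload (assumed to be a JSON body).
            byte[] body = exchange.getRequestBody().readAllBytes();
            System.out.println("received event: "
                    + new String(body, StandardCharsets.UTF_8));
            // Acknowledge the delivery with a 200 response; the response
            // body below is a placeholder, not a specified ack format.
            byte[] resp = "{\"retCode\":0}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, resp.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(resp);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start();
        System.out.println("listening on http://127.0.0.1:8088/sub/test");
    }
}
```

Start a service like this before issuing the subscribe request, so the `url` is reachable when EventMesh delivers events.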


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@eventmesh.apache.org
For additional commands, e-mail: commits-help@eventmesh.apache.org