Posted to commits@servicecomb.apache.org by ni...@apache.org on 2019/09/30 10:09:57 UTC

[servicecomb-pack] branch master updated (6d5bef9 -> ed6fa9d)

This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git.


    from 6d5bef9  SagaStartAnnotationProcessorTimeoutWrapper handle the autoClose is false case
     new a68d398  SCB-1368 Add akka cluster dependency
     new 711eebc  SCB-1368 Add akka cluster property adapter
     new bfe4e11  SCB-1368 Allows the use of system variables -Dlog-file-name=xxxx.log to define log file name
     new 001a26d  SCB-1368 Refactoring model alpha-fsm-channel-kafka and alpha-fsm-channel-redis to alpha-fsm
     new 55c4a3a  SCB-1368 Define generic interface
     new 03d86ca  SCB-1368 Refactoring memory channel
     new 4c2b1e4  SCB-1368 Add Akka cluster dependencies
     new 10c54af  SCB-1368 Add shard region selection Actor
     new 1bd3518  SCB-1368 Add default configuration of Akka cluster
     new b5c2d1c  SCB-1368 Indicate sub types of serializable polymorphic types
     new 0373fa7  SCB-1368 Remove persistent queues for reliability
     new 92264de  SCB-1368 Fix log information bug
     new 93fa5f2  SCB-1368 Add default configuration of Akka
     new ae45ed3  SCB-1368 Log4j2 disable the automatic shutdown hook
     new df1c109  SCB-1368 static variable name is written in upper case letters
     new 94a9287  SCB-1368 Polishing
     new f3472e0  SCB-1368 Kafka at-least-once delivery
     new 4fd334a  SCB-1368 Added debug info
     new 2d26788  SCB-1368 Clean up kafka client extra dependencies
     new 6779ea1  SCB-1368 Delete the Actor state persistent data after transaction data is saved successfully
     new 5693b81  SCB-1368 Change ShardRegion Actor name to saga-shard-region-actor
     new 6e409c2  SCB-1368 Added debug info
     new 2639a13  SCB-1368 Ignore akka distributed data local directory
     new 235032b  SCB-1368 Modify ES default batchSize 100
     new eff495d  SCB-1368 Added the globalTxId prefix for concurrent
     new 8ca383f  SCB-1368 Optimize log information
     new b5bb417  SCB-1368 Added parameter description
     new c0224c1  SCB-1368 Added serialVersionUID
     new 72c2ed6  SCB-1368 Ensure message delivery reliability between Kafka and ClusterShardRegion in cluster mode
     new 142ba86  SCB-1368 Optimize log information
     new b1d919d  SCB-1368 The default value of commit-time-warning is changed to 5s
     new 24fd1bf  SCB-1368 Updated document
     new bf2b577  SCB-1368 Use dependency management to define the Kafka version
     new d91304a  SCB-1368 Update test cases for Akka Cluster Sharding
     new 814d2bb  SCB-1368 Update test cases timeout for CI
     new af32706  SCB-1368 Delete useless code
     new b689c07  SCB-1368 Fix metric statistics bug
     new e37775a  SCB-1368 Added the license header.
     new 828908e  SCB-1368 disable JMX over HTTP
     new 80fb4b6  SCB-1368 Fix metric statistics bug
     new ddecff7  SCB-1368 Update test cases timeout for CI
     new ed6fa9d  SCB-1368 Added null protection logic

The 42 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
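The `from <hash> ... new <hash>` listing above corresponds to the git revision range `6d5bef9..ed6fa9d`. The sketch below illustrates that range semantics in a disposable repository (illustrative only; it does not touch the servicecomb-pack repository, and the commit messages are stand-ins):

```shell
# Build a throwaway repo with an "old tip" plus one new commit,
# then list exactly the commits that are new: the old..new range.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "old tip (plays the role of 6d5bef9)"
old=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "SCB-1368 new work (plays the role of ed6fa9d)"
git log --oneline "$old..HEAD"   # prints only the new commit
```

Against the real repository, `git log --oneline 6d5bef9..ed6fa9d` would print the 42 commits listed above.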


Summary of changes:
 .gitignore                                         |   3 +
 .../org/apache/servicecomb/pack/PackStepdefs.java  |   2 +-
 .../pack/alpha/benchmark/SagaEventBenchmark.java   |  34 +-
 .../alpha/core/fsm/channel/MessagePublisher.java   |   4 +-
 .../pack/alpha/core/fsm/event/base/BaseEvent.java  |  41 +-
 .../pack/alpha/core/metrics/MetricsBean.java       |   7 +-
 .../pack/alpha/core/metrics/MetricsBeanTest.java   |  74 +++
 alpha/alpha-fsm-channel-kafka/README.md            |  28 --
 alpha/alpha-fsm-channel-kafka/pom.xml              | 113 -----
 .../kafka/KafkaChannelAutoConfiguration.java       | 175 -------
 .../fsm/channel/kafka/KafkaMessageListener.java    |  49 --
 .../src/main/resources/META-INF/spring.factories   |  17 -
 .../channel/kafka/test/KafkaActorEventSink.java    |  31 --
 .../fsm/channel/kafka/test/KafkaApplication.java   |  40 --
 .../fsm/channel/kafka/test/KafkaChannelTest.java   |  95 ----
 .../src/test/resources/log4j2.xml                  |  30 --
 alpha/alpha-fsm-channel-redis/README.md            |  17 -
 alpha/alpha-fsm-channel-redis/pom.xml              |  99 ----
 .../src/main/resources/META-INF/spring.factories   |  17 -
 .../pack/alpha/fsm/RedisChannelTest.java           | 130 -----
 .../servicecomb/pack/alpha/fsm/RedisEventSink.java |  32 --
 .../src/test/resources/log4j2.xml                  |  30 --
 alpha/alpha-fsm/pom.xml                            |  53 ++-
 .../pack/alpha/fsm/FsmAutoConfiguration.java       |  78 +--
 .../servicecomb/pack/alpha/fsm/SagaActor.java      | 287 ++++++-----
 .../pack/alpha/fsm/SagaShardRegionActor.java       |  97 ++++
 .../fsm/channel/AbstractActorEventChannel.java     |   3 -
 .../alpha/fsm/channel/AbstractEventConsumer.java   |  20 +
 .../{ => kafka}/KafkaActorEventChannel.java        |  14 +-
 .../kafka/KafkaChannelAutoConfiguration.java       | 150 ++++++
 .../fsm/channel/kafka/KafkaMessagePublisher.java   |  22 +-
 .../fsm/channel/kafka/KafkaSagaEventConsumer.java  | 107 +++++
 .../{ => memory}/MemoryActorEventChannel.java      |  33 +-
 .../memory/MemoryChannelAutoConfiguration.java     |  62 +++
 .../MemorySagaEventConsumer.java}                  |  48 +-
 .../alpha/fsm/channel/redis/MessageSerializer.java |  27 +-
 .../{ => redis}/RedisActorEventChannel.java        |  18 +-
 .../redis/RedisChannelAutoConfiguration.java       |  49 +-
 .../fsm/channel/redis/RedisMessagePublisher.java   |  15 +-
 .../fsm/channel/redis/RedisSagaEventConsumer.java} |  50 +-
 .../DefaultTransactionRepositoryChannel.java}      |  30 +-
 .../MemoryTransactionRepositoryChannel.java        |  71 ---
 .../ElasticsearchTransactionRepository.java        |   8 +-
 .../pack/alpha/fsm/sink/SagaActorEventSender.java  |  82 ----
 .../integration/akka/AkkaClusterListener.java      |  80 ++++
 .../akka/AkkaConfigPropertyAdapter.java            |  26 +-
 .../servicecomb/pack/alpha/fsm/SagaActorTest.java  |   7 +-
 .../pack/alpha/fsm/SagaIntegrationTest.java        |  60 +--
 .../alpha-fsm/src/test/resources/application.yaml  | 205 ++++++++
 alpha/alpha-server/pom.xml                         |   4 +
 .../alpha/server/fsm/FsmSagaDataController.java    |  19 +-
 .../src/main/resources/application.yaml            | 112 ++++-
 alpha/alpha-server/src/main/resources/log4j2.xml   |   9 +-
 alpha/pom.xml                                      |   2 -
 docs/fsm/akka_zh.md                                |  69 +++
 docs/fsm/apis_zh.md                                | 132 ++++++
 docs/fsm/assets/alpha-cluster-architecture.png     | Bin 0 -> 537656 bytes
 docs/fsm/eventchannel_zh.md                        |  34 ++
 docs/fsm/fsm_manual.md                             | 300 ++++++++++++
 docs/fsm/fsm_manual_zh.md                          | 199 +++++---
 docs/fsm/how_to_use_fsm.md                         | 522 --------------------
 docs/fsm/how_to_use_fsm_zh.md                      | 527 ---------------------
 docs/fsm/persistence_zh.md                         | 235 +++++++++
 docs/user_guide.md                                 |   2 +-
 docs/user_guide_zh.md                              |   2 +-
 pom.xml                                            |  44 ++
 66 files changed, 2363 insertions(+), 2619 deletions(-)
 create mode 100644 alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java
 delete mode 100644 alpha/alpha-fsm-channel-kafka/README.md
 delete mode 100644 alpha/alpha-fsm-channel-kafka/pom.xml
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessageListener.java
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/main/resources/META-INF/spring.factories
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaActorEventSink.java
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaApplication.java
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaChannelTest.java
 delete mode 100644 alpha/alpha-fsm-channel-kafka/src/test/resources/log4j2.xml
 delete mode 100644 alpha/alpha-fsm-channel-redis/README.md
 delete mode 100644 alpha/alpha-fsm-channel-redis/pom.xml
 delete mode 100644 alpha/alpha-fsm-channel-redis/src/main/resources/META-INF/spring.factories
 delete mode 100644 alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisChannelTest.java
 delete mode 100644 alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisEventSink.java
 delete mode 100644 alpha/alpha-fsm-channel-redis/src/test/resources/log4j2.xml
 create mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
 create mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractEventConsumer.java
 rename alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/{ => kafka}/KafkaActorEventChannel.java (67%)
 create mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
 rename alpha/{alpha-fsm-channel-kafka => alpha-fsm}/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java (70%)
 create mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
 copy alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/{ => memory}/MemoryActorEventChannel.java (70%)
 create mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryChannelAutoConfiguration.java
 rename alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/{MemoryActorEventChannel.java => memory/MemorySagaEventConsumer.java} (53%)
 rename alpha/{alpha-fsm-channel-redis => alpha-fsm}/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java (84%)
 rename alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/{ => redis}/RedisActorEventChannel.java (75%)
 rename alpha/{alpha-fsm-channel-redis => alpha-fsm}/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java (69%)
 rename alpha/{alpha-fsm-channel-redis => alpha-fsm}/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java (75%)
 rename alpha/{alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessageSubscriber.java => alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisSagaEventConsumer.java} (54%)
 rename alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/{channel/ActiveMQActorEventChannel.java => repository/channel/DefaultTransactionRepositoryChannel.java} (54%)
 delete mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/MemoryTransactionRepositoryChannel.java
 delete mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/sink/SagaActorEventSender.java
 create mode 100644 alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaClusterListener.java
 create mode 100644 alpha/alpha-fsm/src/test/resources/application.yaml
 create mode 100644 docs/fsm/akka_zh.md
 create mode 100644 docs/fsm/apis_zh.md
 create mode 100644 docs/fsm/assets/alpha-cluster-architecture.png
 create mode 100644 docs/fsm/eventchannel_zh.md
 create mode 100755 docs/fsm/fsm_manual.md
 delete mode 100644 docs/fsm/how_to_use_fsm.md
 delete mode 100644 docs/fsm/how_to_use_fsm_zh.md
 create mode 100644 docs/fsm/persistence_zh.md


[servicecomb-pack] 23/42: SCB-1368 Ignore akka distributed data local directory


commit 2639a13f6fafdb38ed705e61f6c5c09d0b0276be
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 01:17:36 2019 +0800

    SCB-1368 Ignore akka distributed data local directory
---
 .gitignore                |  3 ++
 docs/fsm/fsm_manual_zh.md | 84 +++++++++++++++++++++++++++++++++++++----------
 2 files changed, 69 insertions(+), 18 deletions(-)

diff --git a/.gitignore b/.gitignore
index 6176feb..621755d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -36,3 +36,6 @@ target/
 
 #skip the maven-wrapper.jar
 .mvn/wrapper/maven-wrapper.jar
+
+#akka distributed data
+ddata-*
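The new `ddata-*` pattern can be sanity-checked with `git check-ignore` in a scratch repository (a sketch; the directory name is made up, standing in for an Akka distributed-data directory):

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'ddata-*\n' > .gitignore
mkdir ddata-replicator            # stand-in for an Akka distributed-data dir
# check-ignore prints the path (and exits 0) when a pattern matches it
git check-ignore ddata-replicator   # prints: ddata-replicator
```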
diff --git a/docs/fsm/fsm_manual_zh.md b/docs/fsm/fsm_manual_zh.md
index dcaeada..6368e0e 100755
--- a/docs/fsm/fsm_manual_zh.md
+++ b/docs/fsm/fsm_manual_zh.md
@@ -39,11 +39,11 @@ ServiceComb Pack 0.5.0 and later support the Saga state-machine mode; all you need to do at startup
     --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
-    --spring.profiles.active=prd \
     --alpha.feature.akka.enabled=true \
     --alpha.feature.akka.transaction.repository.type=elasticsearch \
     --spring.data.elasticsearch.cluster-name=docker-cluster \
-    --spring.data.elasticsearch.cluster-nodes=localhost:9300  
+    --spring.data.elasticsearch.cluster-nodes=localhost:9300 \
+    --spring.profiles.active=prd  
   ```
 
 * Alpha WEB management console
@@ -139,6 +139,42 @@ Sub Transactions panel: sub-transaction IDs contained in this transaction, sub-transaction state, sub
 
 ## Cluster
 
+Parameters
+
+| Parameter                  | Value  | Description |
+| -------------------------- | ------ | ----------- |
+| server.port                | 8090   |             |
+| alpha.server.port          | 8080   |             |
+| alpha.feature.akka.enabled | true   |             |
+
+Channel parameters
+
+| Parameter                       | Value             | Description |
+| ------------------------------- | ----------------- | ----------- |
+| alpha.feature.akka.channel.type | kafka             |             |
+| spring.kafka.bootstrap-servers  | 192.168.1.10:9092 |             |
+|                                 |                   |             |
+
+Persistence parameters
+
+| Parameter                                      | Value          | Description |
+| ---------------------------------------------- | -------------- | ----------- |
+| alpha.feature.akka.transaction.repository.type | elasticsearch  |             |
+| spring.data.elasticsearch.cluster-name         | docker-cluster |             |
+| spring.data.elasticsearch.cluster-nodes        | localhost:9300 |             |
+
+Akka parameters
+
+| Parameter                                         | Value                           | Description |
+| ------------------------------------------------- | ------------------------------- | ----------- |
+| akkaConfig.akka.persistence.journal.plugin        | akka-persistence-redis.journal  |             |
+| akkaConfig.akka.persistence.snapshot-store.plugin | akka-persistence-redis.snapshot |             |
+| akkaConfig.akka-persistence-redis.redis.mode      | simple                          |             |
+| akkaConfig.akka-persistence-redis.redis.host      | localhost                       |             |
+| akkaConfig.akka-persistence-redis.redis.port      | 6379                            |             |
+| akkaConfig.akka-persistence-redis.redis.database  | 0                               |             |
+|                                                   |                                 |             |
+
 Processing capacity can be scaled horizontally by deploying multiple Alpha instances; the cluster depends on the Kafka service.
 
 * Start Kafka. It can be started with docker compose; the following is a sample compose file
@@ -169,36 +205,44 @@ Sub Transactions panel: sub-transaction IDs contained in this transaction, sub-transaction state, sub
 
   ```bash
   java -jar alpha-server-${version}-exec.jar \
-    --server.port=8090
-    --alpha.server.port=8080
+    --server.port=8090 \
+    --server.host=127.0.0.1 \
+    --alpha.server.port=8080 \
     --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
-    --spring.profiles.active=prd \
-    --alpha.feature.akka.enabled=true \
-    --alpha.feature.akka.transaction.repository.type=elasticsearch \
+    --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
     --spring.data.elasticsearch.cluster-name=docker-cluster \
-    --spring.data.elasticsearch.cluster-nodes=localhost:9300 \
-    --alpha.feature.akka.channel.type=kafka \
-    --spring.kafka.bootstrap-servers=192.168.1.10:9092
+    --spring.data.elasticsearch.cluster-nodes=127.0.0.1:9300 \
+    --akkaConfig.akka.remote.artery.canonical.port=8070 \
+    --akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
+    --akkaConfig.akka-persistence-redis.redis.mode=simple \
+    --akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
+    --akkaConfig.akka-persistence-redis.redis.port=6379 \
+    --akkaConfig.akka-persistence-redis.redis.database=0 \
+    --spring.profiles.active=prd,cluster
   ```
 
  Start Alpha 2
 
   ```bash
   java -jar alpha-server-${version}-exec.jar \
-    --server.port=8091
-    --alpha.server.port=8081
+    --server.port=8091 \
+    --server.host=127.0.0.1 \
+    --alpha.server.port=8081 \
     --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
-    --spring.profiles.active=prd \
-    --alpha.feature.akka.enabled=true \
-    --alpha.feature.akka.transaction.repository.type=elasticsearch \
+    --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
     --spring.data.elasticsearch.cluster-name=docker-cluster \
-    --spring.data.elasticsearch.cluster-nodes=localhost:9300 \
-    --alpha.feature.akka.channel.type=kafka \
-    --spring.kafka.bootstrap-servers=192.168.1.10:9092
+    --spring.data.elasticsearch.cluster-nodes=127.0.0.1:9300 \
+    --akkaConfig.akka.remote.artery.canonical.port=8071 \
+    --akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
+    --akkaConfig.akka-persistence-redis.redis.mode=simple \
+    --akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
+    --akkaConfig.akka-persistence-redis.redis.port=6379 \
+    --akkaConfig.akka-persistence-redis.redis.database=0 \
+    --spring.profiles.active=prd,cluster
   ```
 
  Cluster parameter notes
@@ -211,6 +255,10 @@ Sub Transactions panel: sub-transaction IDs contained in this transaction, sub-transaction state, sub
 
  spring.kafka.bootstrap-servers: Kafka address; separate multiple addresses with commas
 
+  
+
+
+
 ## Next steps
 
 Akka cluster support
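The `--akkaConfig.*` flags in the startup commands above are Spring properties that get forwarded into the Akka configuration (this change set touches `AkkaConfigPropertyAdapter` for exactly that purpose). Below is a minimal sketch of the prefix-stripping idea, written as assumed behavior for illustration; the class name and method are hypothetical, not the project's actual adapter:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map Spring properties carrying the "akkaConfig."
// prefix onto plain Akka configuration keys, dropping everything else.
public class AkkaConfigPropertyAdapterSketch {

  static final String PREFIX = "akkaConfig.";

  static Map<String, String> toAkkaKeys(Map<String, String> springProps) {
    Map<String, String> akka = new HashMap<>();
    for (Map.Entry<String, String> e : springProps.entrySet()) {
      if (e.getKey().startsWith(PREFIX)) {
        // "akkaConfig.akka.remote.artery.canonical.port"
        //   -> "akka.remote.artery.canonical.port"
        akka.put(e.getKey().substring(PREFIX.length()), e.getValue());
      }
    }
    return akka;
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<>();
    props.put("akkaConfig.akka.remote.artery.canonical.port", "8070");
    props.put("server.port", "8090"); // not an akkaConfig.* key, so it is dropped
    System.out.println(toAkkaKeys(props)); // {akka.remote.artery.canonical.port=8070}
  }
}
```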


[servicecomb-pack] 34/42: SCB-1368 Update test cases for Akka Cluster Sharding


commit d91304a2554fb4ed11124925a85ed0e546fa2fd9
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 19:44:23 2019 +0800

    SCB-1368 Update test cases for Akka Cluster Sharding
---
 .../servicecomb/pack/alpha/fsm/SagaActor.java      |  23 +--
 .../pack/alpha/fsm/SagaShardRegionActor.java       |  18 +-
 .../servicecomb/pack/alpha/fsm/SagaActorTest.java  |   1 -
 .../pack/alpha/fsm/SagaIntegrationTest.java        |   7 +-
 .../alpha-fsm/src/test/resources/application.yaml  | 205 +++++++++++++++++++++
 5 files changed, 232 insertions(+), 22 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
index e64b82d..b7e0a1f 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
@@ -56,19 +56,20 @@ public class SagaActor extends
     AbstractPersistentFSM<SagaActorState, SagaData, DomainEvent> {
 
   private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
+  private String persistenceId;
+  private long sagaBeginTime;
+  private long sagaEndTime;
 
   public static Props props(String persistenceId) {
     return Props.create(SagaActor.class, persistenceId);
   }
 
-  private String persistenceId;
-
-  private long sagaBeginTime;
-  private long sagaEndTime;
-
-  public SagaActor() {
-    this.persistenceId = getSelf().path().name();
+  public SagaActor(String persistenceId) {
+    if (persistenceId != null) {
+      this.persistenceId = persistenceId;
+    } else {
+      this.persistenceId = getSelf().path().name();
+    }
 
     startWith(SagaActorState.IDLE, SagaData.builder().build());
 
@@ -487,14 +488,14 @@ public class SagaActor extends
             }
           });
         } else if (domainEvent.getState() == SagaActorState.SUSPENDED) {
-          data.setEndTime(event.getEvent().getCreateTime());
+          data.setEndTime(event.getEvent() != null ? event.getEvent().getCreateTime() : new Date());
           data.setTerminated(true);
           data.setSuspendedType(domainEvent.getSuspendedType());
         } else if (domainEvent.getState() == SagaActorState.COMPENSATED) {
-          data.setEndTime(event.getEvent().getCreateTime());
+          data.setEndTime(event.getEvent() != null ? event.getEvent().getCreateTime() : new Date());
           data.setTerminated(true);
         } else if (domainEvent.getState() == SagaActorState.COMMITTED) {
-          data.setEndTime(event.getEvent().getCreateTime());
+          data.setEndTime(event.getEvent() != null ? event.getEvent().getCreateTime() : new Date());
           data.setTerminated(true);
         }
       }
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
index daa4ee4..5d0d6d8 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
@@ -70,7 +70,7 @@ public class SagaShardRegionActor extends AbstractActor {
     sagaActorRegion = ClusterSharding.get(system)
         .start(
             SagaActor.class.getSimpleName(),
-            Props.create(SagaActor.class),
+            SagaActor.props(null),
             settings,
             messageExtractor);
   }
@@ -79,14 +79,16 @@ public class SagaShardRegionActor extends AbstractActor {
   public Receive createReceive() {
     return receiveBuilder()
         .matchAny(event -> {
-          final BaseEvent evt = (BaseEvent) event;
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("=> [{}] {} {}", evt.getGlobalTxId(), evt.getType(), evt.getLocalTxId());
-          }
+          if(event instanceof BaseEvent){
+            final BaseEvent evt = (BaseEvent) event;
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("=> [{}] {} {}", evt.getGlobalTxId(), evt.getType(), evt.getLocalTxId());
+            }
 
-          sagaActorRegion.tell(event, getSelf());
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("<= [{}] {} {}", evt.getGlobalTxId(), evt.getType(), evt.getLocalTxId());
+            sagaActorRegion.tell(event, getSelf());
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("<= [{}] {} {}", evt.getGlobalTxId(), evt.getType(), evt.getLocalTxId());
+            }
           }
           getSender().tell("confirm", getSelf());
         })
diff --git a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java
index e32fbfe..2fe812e 100644
--- a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java
+++ b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java
@@ -228,7 +228,6 @@ public class SagaActorTest {
           SagaActorState.PARTIALLY_ACTIVE);
 
       //expectTerminated(fsm);
-
       ActorRef recoveredSaga = system.actorOf(SagaActor.props(persistenceId), "recoveredSaga");
       watch(recoveredSaga);
       recoveredSaga.tell(new PersistentFSM.SubscribeTransitionCallBack(getRef()), getRef());
diff --git a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
index 03a89b0..508be9f 100644
--- a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
+++ b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
@@ -83,8 +83,11 @@ public class SagaIntegrationTest {
       memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
-      SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
-      return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()== SagaActorState.COMMITTED;
+      SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system)
+          .getLastSagaData();
+      return sagaData != null && sagaData.isTerminated()
+          && sagaData.getLastState() == SagaActorState.COMMITTED
+          && metricsService.metrics().getSagaEndCounter() == 1;
     });
     SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
     assertNotNull(sagaData.getBeginTime());
diff --git a/alpha/alpha-fsm/src/test/resources/application.yaml b/alpha/alpha-fsm/src/test/resources/application.yaml
new file mode 100644
index 0000000..4f202be
--- /dev/null
+++ b/alpha/alpha-fsm/src/test/resources/application.yaml
@@ -0,0 +1,205 @@
+## ---------------------------------------------------------------------------
+## Licensed to the Apache Software Foundation (ASF) under one or more
+## contributor license agreements.  See the NOTICE file distributed with
+## this work for additional information regarding copyright ownership.
+## The ASF licenses this file to You under the Apache License, Version 2.0
+## (the "License"); you may not use this file except in compliance with
+## the License.  You may obtain a copy of the License at
+##
+##      http://www.apache.org/licenses/LICENSE-2.0
+##
+## Unless required by applicable law or agreed to in writing, software
+## distributed under the License is distributed on an "AS IS" BASIS,
+## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+## See the License for the specific language governing permissions and
+## limitations under the License.
+## ---------------------------------------------------------------------------
+server:
+  port: 8090
+  host: 0.0.0.0
+
+alpha:
+  server:
+    host: ${server.host}
+    port: 8080
+  feature:
+    akka:
+      enabled: false
+      channel:
+        type: memory
+      transaction:
+        repository:
+          type: elasticsearch
+
+spring:
+  datasource:
+    initialization-mode: always
+  main:
+    allow-bean-definition-overriding: true
+  cloud:
+    consul:
+      host: 0.0.0.0
+      port: 8500
+      discovery:
+        serviceName: ${spring.application.name}
+        healthCheckPath: /actuator/health
+        healthCheckInterval: 10s
+        instanceId: ${spring.application.name}-${alpha.server.host}-${random.value}
+        tags: alpha-server-host=${alpha.server.host},alpha-server-port=${alpha.server.port}
+
+eureka:
+  client:
+    enabled: false
+  instance:
+    metadataMap:
+      servicecomb-alpha-server: ${alpha.server.host}:${alpha.server.port}
+
+
+akkaConfig:
+  akka:
+    loglevel: INFO
+    loggers: ["akka.event.slf4j.Slf4jLogger"]
+    logging-filter: akka.event.slf4j.Slf4jLoggingFilter
+    log-dead-letters: off
+    log-dead-letters-during-shutdown: off
+    actor:
+      warn-about-java-serializer-usage: false
+      provider: cluster
+    persistence:
+      journal:
+        plugin: akka.persistence.journal.inmem
+        leveldb.dir: actor/persistence/journal
+      snapshot-store:
+        plugin: akka.persistence.snapshot-store.local
+        local.dir: actor/persistence/snapshots
+    remote:
+      watch-failure-detector:
+        acceptable-heartbeat-pause: 6s
+      artery:
+        enabled: on
+        transport: tcp
+        advanced:
+          outbound-message-queue-size: 20000
+        canonical:
+          hostname: ${alpha.server.host}
+          port: 8070
+    cluster:
+      auto-down-unreachable-after: "off" # disable automatic downing
+      failure-detector:
+        heartbeat-interval: 3s
+        acceptable-heartbeat-pause: 6s
+      seed-nodes: ["akka://alpha-cluster@0.0.0.0:8070"]
+    sharding:
+      state-store-mode: ddata
+      remember-entities: "on"
+      shard-failure-backoff: 5s
+
+management:
+  endpoints:
+    web:
+      exposure:
+        include: "*"
+  health:
+    redis:
+      enabled: false
+    elasticsearch:
+      enabled: false
+
+---
+spring:
+  profiles: ssl
+alpha:
+  server:
+    ssl:
+      enable: true
+      cert: server.crt
+      key: server.pem
+      mutualAuth: true
+      clientCert: client.crt
+
+---
+spring:
+  profiles: prd
+  datasource:
+    username: saga
+    password: password
+    url: jdbc:postgresql://postgresql.servicecomb.io:5432/saga?useSSL=false
+    platform: postgresql
+    continue-on-error: false
+  jpa:
+    properties:
+      eclipselink:
+        ddl-generation: none
+
+---
+spring:
+  profiles: mysql
+  datasource:
+    username: saga
+    password: password
+    url: jdbc:mysql://mysql.servicecomb.io:3306/saga?useSSL=false
+    platform: mysql
+    continue-on-error: false
+  jpa:
+    properties:
+      eclipselink:
+        ddl-generation: none
+
+---
+spring:
+  profiles: cluster
+
+alpha:
+  feature:
+    akka:
+      enabled: true
+      channel:
+        type: kafka
+
+akkaConfig:
+  akka:
+    actor:
+      provider: cluster
+    persistence:
+      at-least-once-delivery:
+        redeliver-interval: 10s
+        redelivery-burst-limit: 2000
+      journal:
+        plugin: akka-persistence-redis.journal
+      snapshot-store:
+        plugin: akka-persistence-redis.snapshot
+    sharding:
+      state-store-mode: persistence
+    kafka:
+      consumer:
+        poll-interval: 50ms
+        stop-timeout: 30s
+        close-timeout: 20s
+        commit-timeout: 15s
+        commit-time-warning: 5s
+        commit-refresh-interval: infinite
+        use-dispatcher: "akka.kafka.saga-kafka"
+        kafka-clients.enable.auto.commit: false
+        wait-close-partition: 500ms
+        position-timeout: 10s
+        offset-for-times-timeout: 10s
+        metadata-request-timeout: 10s
+        eos-draining-check-interval: 30ms
+        partition-handler-warning: 5s
+        connection-checker.enable: false
+        connection-checker.max-retries: 3
+        connection-checker.check-interval: 15s
+        connection-checker.backoff-factor: 2.0
+      saga-kafka:
+        type: "Dispatcher"
+        executor: "thread-pool-executor"
+        thread-pool-executor:
+          fixed-pool-size: 20
+
+
+akka-persistence-redis:
+  redis:
+    mode: "simple"
+    host: "127.0.0.1"
+    port: 6379
+    database: 0
\ No newline at end of file


[servicecomb-pack] 15/42: SCB-1368 static variable name is written in upper case letters


commit df1c109b52108cf63813c16d23a3a66ac03a3453
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 00:43:27 2019 +0800

    SCB-1368 static variable name is written in upper case letters
---
 .../pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java      | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
index 95de39b..52de8ef 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
@@ -26,7 +26,7 @@ import org.springframework.kafka.core.KafkaTemplate;
 
 public class KafkaMessagePublisher implements MessagePublisher<BaseEvent> {
 
-    private static final Logger logger = LoggerFactory.getLogger(KafkaMessagePublisher.class);
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaMessagePublisher.class);
 
     private String topic;
     private KafkaTemplate<String, Object> kafkaTemplate;
@@ -38,14 +38,13 @@ public class KafkaMessagePublisher implements MessagePublisher<BaseEvent> {
 
     @Override
     public void publish(BaseEvent data) {
-        if(logger.isDebugEnabled()){
-            logger.debug("send message [{}] to [{}]", data, topic);
+        if(LOG.isDebugEnabled()){
+            LOG.debug("send to kafka {} {} to {}", data.getGlobalTxId(), data.getType(), topic);
         }
-
         try {
             kafkaTemplate.send(topic, data.getGlobalTxId(), data).get();
         } catch (InterruptedException | ExecutionException | UnsupportedOperationException e) {
-            logger.error("publish Exception = [{}]", e.getMessage(), e);
+            LOG.error("publish Exception = [{}]", e.getMessage(), e);
             throw new RuntimeException(e);
         }
     }
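In the `publish()` method above, `kafkaTemplate.send(...).get()` blocks until the broker acknowledges the record, turning the asynchronous send into a synchronous, fail-fast publish. A self-contained sketch of the same pattern, with a stand-in `send` that returns a completed future instead of a real Kafka client (it also restores the interrupt flag, which the diff above does not):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class SyncSendSketch {

  // Stand-in for KafkaTemplate.send(): returns a future that would complete when
  // the broker acknowledges the record. Hypothetical, for illustration only.
  static CompletableFuture<String> send(String topic, String key, String value) {
    return CompletableFuture.completedFuture(topic + "/" + key);
  }

  // Mirrors the publish() pattern in the diff above: blocking on get() makes
  // delivery synchronous, and any failure surfaces immediately to the caller.
  static String publish(String topic, String key, String value) {
    try {
      return send(topic, key, value).get();
    } catch (InterruptedException | ExecutionException e) {
      Thread.currentThread().interrupt(); // preserve interrupt status
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(publish("saga-topic", "globalTxId-1", "event")); // saga-topic/globalTxId-1
  }
}
```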


[servicecomb-pack] 20/42: SCB-1368 Delete the Actor state persistent data after transaction data is saved successfully

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 6779ea15fa757ba10535391e01dac412f1dbe518
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 01:06:05 2019 +0800

    SCB-1368 Delete the Actor state persistent data after transaction data is saved successfully
---
 .../servicecomb/pack/alpha/fsm/SagaActor.java      | 278 +++++++++++++--------
 .../src/main/resources/application.yaml            |  19 +-
 2 files changed, 180 insertions(+), 117 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
index ba4b0ff..4bd536e 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
@@ -17,7 +17,9 @@
 
 package org.apache.servicecomb.pack.alpha.fsm;
 
+import akka.actor.PoisonPill;
 import akka.actor.Props;
+import akka.cluster.sharding.ShardRegion;
 import akka.persistence.fsm.AbstractPersistentFSM;
 import java.lang.invoke.MethodHandles;
 import java.util.Arrays;
@@ -27,6 +29,7 @@ import java.util.concurrent.TimeUnit;
 import org.apache.servicecomb.pack.alpha.core.AlphaException;
 import org.apache.servicecomb.pack.alpha.core.fsm.SuspendedType;
 import org.apache.servicecomb.pack.alpha.core.fsm.TxState;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
 import org.apache.servicecomb.pack.alpha.fsm.domain.AddTxEventDomain;
 import org.apache.servicecomb.pack.alpha.fsm.domain.DomainEvent;
 import org.apache.servicecomb.pack.alpha.fsm.domain.SagaEndedDomain;
@@ -72,6 +75,7 @@ public class SagaActor extends
     when(SagaActorState.IDLE,
         matchEvent(SagaStartedEvent.class,
             (event, data) -> {
+              log(event);
               sagaBeginTime = System.currentTimeMillis();
               SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(context().system()).doSagaBeginCounter();
               SagaStartedDomain domainEvent = new SagaStartedDomain(event);
@@ -92,6 +96,7 @@ public class SagaActor extends
     when(SagaActorState.READY,
         matchEvent(TxStartedEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return goTo(SagaActorState.PARTIALLY_ACTIVE)
@@ -104,12 +109,14 @@ public class SagaActor extends
             }
         ).event(SagaEndedEvent.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.UNPREDICTABLE);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
             }
         ).event(SagaAbortedEvent.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.UNPREDICTABLE);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
@@ -125,6 +132,7 @@ public class SagaActor extends
     when(SagaActorState.PARTIALLY_ACTIVE,
         matchEvent(TxEndedEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return goTo(SagaActorState.PARTIALLY_COMMITTED)
@@ -137,6 +145,7 @@ public class SagaActor extends
             }
         ).event(TxStartedEvent.class,
             (event, data) -> {
+              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return stay()
@@ -148,6 +157,7 @@ public class SagaActor extends
             }
         ).event(SagaTimeoutEvent.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED,
                   SuspendedType.TIMEOUT);
               return goTo(SagaActorState.SUSPENDED)
@@ -155,6 +165,7 @@ public class SagaActor extends
             }
         ).event(TxAbortedEvent.class,
             (event, data) -> {
+              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return goTo(SagaActorState.FAILED)
                   .applying(domainEvent);
@@ -169,6 +180,7 @@ public class SagaActor extends
     when(SagaActorState.PARTIALLY_COMMITTED,
         matchEvent(TxStartedEvent.class,
             (event, data) -> {
+              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return goTo(SagaActorState.PARTIALLY_ACTIVE)
@@ -181,6 +193,7 @@ public class SagaActor extends
             }
         ).event(TxEndedEvent.class,
             (event, data) -> {
+              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return stay()
@@ -192,23 +205,27 @@ public class SagaActor extends
             }
         ).event(SagaTimeoutEvent.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.TIMEOUT);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
             }
         ).event(SagaEndedEvent.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.COMMITTED);
               return goTo(SagaActorState.COMMITTED)
                   .applying(domainEvent);
             }
         ).event(SagaAbortedEvent.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.FAILED);
               return goTo(SagaActorState.FAILED).applying(domainEvent);
             }
         ).event(TxAbortedEvent.class,
             (event, data) -> {
+              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return goTo(SagaActorState.FAILED).applying(domainEvent);
             }
@@ -222,12 +239,14 @@ public class SagaActor extends
     when(SagaActorState.FAILED,
         matchEvent(SagaTimeoutEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.TIMEOUT);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
             }
         ).event(TxCompensatedEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return stay().applying(domainEvent).andThen(exec(_data -> {
                 self().tell(ComponsitedCheckEvent.builder().build(), self());
@@ -235,6 +254,7 @@ public class SagaActor extends
             }
         ).event(ComponsitedCheckEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               if (hasCompensationSentTx(data) || !data.isTerminated()) {
                 return stay();
               } else {
@@ -246,6 +266,7 @@ public class SagaActor extends
             }
         ).event(SagaAbortedEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               data.setTerminated(true);
               if (hasCommittedTx(data)) {
                 SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.FAILED);
@@ -264,11 +285,13 @@ public class SagaActor extends
             }
         ).event(TxStartedEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               return stay().applying(domainEvent);
             }
         ).event(TxEndedEvent.class, SagaData.class,
             (event, data) -> {
+              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return stay().applying(domainEvent).andThen(exec(_data -> {
                 TxEntity txEntity = _data.getTxEntityMap().get(event.getLocalTxId());
@@ -287,27 +310,8 @@ public class SagaActor extends
     when(SagaActorState.COMMITTED,
         matchEvent(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.class,
             (event, data) -> {
-              //  A stopped actor is cleaned up with the two calls below, but the
-              //  highestSequenceNr entry is not deleted and must be removed manually.
-              //  The following notes apply to journal-redis:
-              //    Suppose globalTxId=ed2cdb9c-e86c-4b01-9f43-8e34704e7694; then three keys are created in Redis:
-              //    journal:persistenceIds
-              //    journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694
-              //    journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694:highestSequenceNr
-              //
-              //    1. journal:persistenceIds is a set recording every globalTxId; inspect it with smembers journal:persistenceIds
-              //    2. journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694 is a zset recording all events of this transaction;
-              //       inspect it with zrange journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694 1 -1
-              //    3. journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694:highestSequenceNr is a string holding the sequence number
-              //
-              //    How to clean up:
-              //      deleteMessages and deleteSnapshot remove part of the data, but highestSequenceNr still cannot be
-              //      deleted automatically and needs periodic manual cleanup.
-              //      Iterate over the journal:persistenceIds set and, for each item, build the keys
-              //      journal:persisted:item and journal:persisted:item:highestSequenceNr.
-              //      If they do not appear as a pair, the actor has already terminated, so
-              //      journal:persisted:item can be removed from journal:persistenceIds
-              //      and journal:persisted:item:highestSequenceNr deleted.
-              //
-              //  The explanation available so far is https://github.com/akka/akka/issues/21181
-              deleteMessages(lastSequenceNr());
-              deleteSnapshot(snapshotSequenceNr());
+              log(event);
+              beforeStop(stateName(), data);
               return stop();
             }
         )
@@ -316,8 +320,8 @@ public class SagaActor extends
     when(SagaActorState.SUSPENDED,
         matchEvent(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.class,
             (event, data) -> {
-              deleteMessages(lastSequenceNr());
-              deleteSnapshot(snapshotSequenceNr());
+              log(event);
+              beforeStop(stateName(), data);
               return stop();
             }
         )
@@ -326,8 +330,8 @@ public class SagaActor extends
     when(SagaActorState.COMPENSATED,
         matchEvent(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.class,
             (event, data) -> {
-              deleteMessages(lastSequenceNr());
-              deleteSnapshot(snapshotSequenceNr());
+              log(event);
+              beforeStop(stateName(), data);
               return stop();
             }
         )
@@ -348,13 +352,14 @@ public class SagaActor extends
                 .putSagaData(stateData().getGlobalTxId(), stateData());
           }
           if (LOG.isDebugEnabled()) {
-            LOG.debug("transition {} {} -> {}", getSelf(), from, to);
+            LOG.debug("transition {} {} -> {}", stateData().getGlobalTxId(), from, to);
           }
           if (to == SagaActorState.COMMITTED ||
               to == SagaActorState.SUSPENDED ||
               to == SagaActorState.COMPENSATED) {
             self().tell(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.builder().build(), self());
           }
+          LOG.info("transition {} {} -> {}", stateData().getGlobalTxId(), from, to);
         })
     );
 
@@ -362,102 +367,151 @@ public class SagaActor extends
         matchStop(
             Normal(), (state, data) -> {
               if (LOG.isDebugEnabled()) {
-                LOG.debug("stop {} {}", data.getGlobalTxId(), state);
+                LOG.debug("saga actor stopped {} {}", getSelf(), state);
               }
-              sagaEndTime = System.currentTimeMillis();
-              SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(context().system()).doSagaEndCounter();
-              SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(context().system()).doSagaAvgTime(sagaEndTime - sagaBeginTime);
-              data.setLastState(state);
-              data.setEndTime(new Date());
-              data.setTerminated(true);
-              SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(getContext().getSystem())
-                  .stopSagaData(data.getGlobalTxId(), data);
+              LOG.info("stopped {} {}", data.getGlobalTxId(), state);
             }
         )
     );
 
   }
 
-  @Override
-  public SagaData applyEvent(DomainEvent event, SagaData data) {
-    if (this.recoveryRunning()) {
-      LOG.info("SagaActor recovery {}",event.getEvent());
-    }
+  private void beforeStop(SagaActorState state, SagaData data){
     if (LOG.isDebugEnabled()) {
-      LOG.debug("SagaActor apply event {}", event.getEvent());
+      LOG.debug("stop {} {}", data.getGlobalTxId(), state);
     }
-    // log event to SagaData
-    if (event.getEvent() != null && !(event
-        .getEvent() instanceof ComponsitedCheckEvent)) {
-      data.logEvent(event.getEvent());
+    try{
+      sagaEndTime = System.currentTimeMillis();
+      data.setLastState(state);
+      data.setEndTime(new Date());
+      data.setTerminated(true);
+      SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(getContext().getSystem())
+          .stopSagaData(data.getGlobalTxId(), data);
+      SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(context().system()).doSagaEndCounter();
+      SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(context().system())
+          .doSagaAvgTime(sagaEndTime - sagaBeginTime);
+
+      // destroy self from cluster shard region
+      getContext().getParent()
+          .tell(new ShardRegion.Passivate(PoisonPill.getInstance()), getSelf());
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("destroy saga actor {} from cluster shard region", getSelf());
+      }
+
+      // clear self mailbox from persistence
+      //  A stopped actor is cleaned up with the two calls below, but the
+      //  highestSequenceNr entry is not deleted and must be removed manually.
+      //  The following notes apply to journal-redis:
+      //    Suppose globalTxId=ed2cdb9c-e86c-4b01-9f43-8e34704e7694; then three keys are created in Redis:
+      //    journal:persistenceIds
+      //    journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694
+      //    journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694:highestSequenceNr
+      //
+      //    1. journal:persistenceIds is a set recording every globalTxId; inspect it with smembers journal:persistenceIds
+      //    2. journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694 is a zset recording all events of this transaction;
+      //       inspect it with zrange journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694 1 -1
+      //    3. journal:persisted:ed2cdb9c-e86c-4b01-9f43-8e34704e7694:highestSequenceNr is a string holding the sequence number
+      //
+      //    How to clean up:
+      //      deleteMessages and deleteSnapshot remove part of the data, but highestSequenceNr still cannot be
+      //      deleted automatically and needs periodic manual cleanup.
+      //      Iterate over the journal:persistenceIds set and, for each item, build the keys
+      //      journal:persisted:item and journal:persisted:item:highestSequenceNr.
+      //      If they do not appear as a pair, the actor has already terminated, so
+      //      journal:persisted:item can be removed from journal:persistenceIds
+      //      and journal:persisted:item:highestSequenceNr deleted.
+      //
+      //  The explanation available so far is https://github.com/akka/akka/issues/21181
+      deleteMessages(lastSequenceNr());
+      deleteSnapshot(snapshotSequenceNr());
+    }catch(Exception e){
+      LOG.error("stop {} fail",data.getGlobalTxId());
+      throw e;
     }
-    if (event instanceof SagaStartedDomain) {
-      SagaStartedDomain domainEvent = (SagaStartedDomain) event;
-      data.setServiceName(domainEvent.getEvent().getServiceName());
-      data.setInstanceId(domainEvent.getEvent().getInstanceId());
-      data.setGlobalTxId(domainEvent.getEvent().getGlobalTxId());
-      data.setBeginTime(domainEvent.getEvent().getCreateTime());
-      data.setExpirationTime(domainEvent.getExpirationTime());
-    } else if (event instanceof AddTxEventDomain) {
-      AddTxEventDomain domainEvent = (AddTxEventDomain) event;
-      if (!data.getTxEntityMap().containsKey(domainEvent.getEvent().getLocalTxId())) {
-        TxEntity txEntity = TxEntity.builder()
-            .serviceName(domainEvent.getEvent().getServiceName())
-            .instanceId(domainEvent.getEvent().getInstanceId())
-            .globalTxId(domainEvent.getEvent().getGlobalTxId())
-            .localTxId(domainEvent.getEvent().getLocalTxId())
-            .parentTxId(domainEvent.getEvent().getParentTxId())
-            .compensationMethod(domainEvent.getCompensationMethod())
-            .payloads(domainEvent.getPayloads())
-            .state(domainEvent.getState())
-            .beginTime(domainEvent.getEvent().getCreateTime())
-            .build();
-        data.getTxEntityMap().put(txEntity.getLocalTxId(), txEntity);
-      } else {
-        LOG.warn("TxEntity {} already exists", domainEvent.getEvent().getLocalTxId());
+  }
+
+  @Override
+  public SagaData applyEvent(DomainEvent event, SagaData data) {
+    try{
+      if (this.recoveryRunning()) {
+        LOG.info("SagaActor recovery {}",event.getEvent());
+      }else if (LOG.isDebugEnabled()) {
+        LOG.debug("SagaActor apply event {}", event.getEvent());
       }
-    } else if (event instanceof UpdateTxEventDomain) {
-      UpdateTxEventDomain domainEvent = (UpdateTxEventDomain) event;
-      TxEntity txEntity = data.getTxEntityMap().get(domainEvent.getLocalTxId());
-      txEntity.setEndTime(domainEvent.getEvent().getCreateTime());
-      if (domainEvent.getState() == TxState.COMMITTED) {
-        txEntity.setState(domainEvent.getState());
-      } else if (domainEvent.getState() == TxState.FAILED) {
-        txEntity.setState(domainEvent.getState());
-        txEntity.setThrowablePayLoads(domainEvent.getThrowablePayLoads());
-        data.getTxEntityMap().forEach((k, v) -> {
-          if (v.getState() == TxState.COMMITTED) {
-            // call compensate
-            compensation(v, data);
-          }
-        });
-      } else if (domainEvent.getState() == TxState.COMPENSATED) {
-        // decrement the compensation running counter by one
-        data.getCompensationRunningCounter().decrementAndGet();
-        txEntity.setState(domainEvent.getState());
-        LOG.info("compensation is completed {}", txEntity.getLocalTxId());
+      // log event to SagaData
+      if (event.getEvent() != null && !(event
+          .getEvent() instanceof ComponsitedCheckEvent)) {
+        data.logEvent(event.getEvent());
       }
-    } else if (event instanceof SagaEndedDomain) {
-      SagaEndedDomain domainEvent = (SagaEndedDomain) event;
-      if (domainEvent.getState() == SagaActorState.FAILED) {
-        data.setTerminated(true);
-        data.getTxEntityMap().forEach((k, v) -> {
-          if (v.getState() == TxState.COMMITTED) {
-            // call compensate
-            compensation(v, data);
-          }
-        });
-      } else if (domainEvent.getState() == SagaActorState.SUSPENDED) {
-        data.setEndTime(event.getEvent().getCreateTime());
-        data.setTerminated(true);
-        data.setSuspendedType(domainEvent.getSuspendedType());
-      } else if (domainEvent.getState() == SagaActorState.COMPENSATED) {
-        data.setEndTime(event.getEvent().getCreateTime());
-        data.setTerminated(true);
-      } else if (domainEvent.getState() == SagaActorState.COMMITTED) {
-        data.setEndTime(event.getEvent().getCreateTime());
-        data.setTerminated(true);
+      if (event instanceof SagaStartedDomain) {
+        SagaStartedDomain domainEvent = (SagaStartedDomain) event;
+        data.setServiceName(domainEvent.getEvent().getServiceName());
+        data.setInstanceId(domainEvent.getEvent().getInstanceId());
+        data.setGlobalTxId(domainEvent.getEvent().getGlobalTxId());
+        data.setBeginTime(domainEvent.getEvent().getCreateTime());
+        data.setExpirationTime(domainEvent.getExpirationTime());
+      } else if (event instanceof AddTxEventDomain) {
+        AddTxEventDomain domainEvent = (AddTxEventDomain) event;
+        if (!data.getTxEntityMap().containsKey(domainEvent.getEvent().getLocalTxId())) {
+          TxEntity txEntity = TxEntity.builder()
+              .serviceName(domainEvent.getEvent().getServiceName())
+              .instanceId(domainEvent.getEvent().getInstanceId())
+              .globalTxId(domainEvent.getEvent().getGlobalTxId())
+              .localTxId(domainEvent.getEvent().getLocalTxId())
+              .parentTxId(domainEvent.getEvent().getParentTxId())
+              .compensationMethod(domainEvent.getCompensationMethod())
+              .payloads(domainEvent.getPayloads())
+              .state(domainEvent.getState())
+              .beginTime(domainEvent.getEvent().getCreateTime())
+              .build();
+          data.getTxEntityMap().put(txEntity.getLocalTxId(), txEntity);
+        } else {
+          LOG.warn("TxEntity {} already exists", domainEvent.getEvent().getLocalTxId());
+        }
+      } else if (event instanceof UpdateTxEventDomain) {
+        UpdateTxEventDomain domainEvent = (UpdateTxEventDomain) event;
+        TxEntity txEntity = data.getTxEntityMap().get(domainEvent.getLocalTxId());
+        txEntity.setEndTime(domainEvent.getEvent().getCreateTime());
+        if (domainEvent.getState() == TxState.COMMITTED) {
+          txEntity.setState(domainEvent.getState());
+        } else if (domainEvent.getState() == TxState.FAILED) {
+          txEntity.setState(domainEvent.getState());
+          txEntity.setThrowablePayLoads(domainEvent.getThrowablePayLoads());
+          data.getTxEntityMap().forEach((k, v) -> {
+            if (v.getState() == TxState.COMMITTED) {
+              // call compensate
+              compensation(v, data);
+            }
+          });
+        } else if (domainEvent.getState() == TxState.COMPENSATED) {
+          // decrement the compensation running counter by one
+          data.getCompensationRunningCounter().decrementAndGet();
+          txEntity.setState(domainEvent.getState());
+          LOG.info("compensation is completed {}", txEntity.getLocalTxId());
+        }
+      } else if (event instanceof SagaEndedDomain) {
+        SagaEndedDomain domainEvent = (SagaEndedDomain) event;
+        if (domainEvent.getState() == SagaActorState.FAILED) {
+          data.setTerminated(true);
+          data.getTxEntityMap().forEach((k, v) -> {
+            if (v.getState() == TxState.COMMITTED) {
+              // call compensate
+              compensation(v, data);
+            }
+          });
+        } else if (domainEvent.getState() == SagaActorState.SUSPENDED) {
+          data.setEndTime(event.getEvent().getCreateTime());
+          data.setTerminated(true);
+          data.setSuspendedType(domainEvent.getSuspendedType());
+        } else if (domainEvent.getState() == SagaActorState.COMPENSATED) {
+          data.setEndTime(event.getEvent().getCreateTime());
+          data.setTerminated(true);
+        } else if (domainEvent.getState() == SagaActorState.COMMITTED) {
+          data.setEndTime(event.getEvent().getCreateTime());
+          data.setTerminated(true);
+        }
       }
+    }catch (Exception ex){
+      LOG.error("SagaActor apply event {}", event.getEvent());
+      beforeStop(SagaActorState.SUSPENDED, data);
+      stop();
+      //TODO add a metric for SagaActor processing failures
     }
     return data;
   }
@@ -531,4 +585,10 @@ public class SagaActor extends
       }
     }
   }
+
+  private void log(BaseEvent event) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug(event.toString());
+    }
+  }
 }
diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index 98a58ce..664f692 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -90,7 +90,7 @@ akkaConfig:
         acceptable-heartbeat-pause: 6s
       seed-nodes: ["akka://alpha-cluster@127.0.0.1:8070"]
       sharding:
-        state-store-mode: "persistence"
+        state-store-mode: "ddata" #ddata,persistence
         remember-entities: true
         shard-failure-backoff: 5s
 
@@ -173,21 +173,24 @@ akkaConfig:
         commit-timeout: 15s
         commit-time-warning: 1s
         commit-refresh-interval: infinite
-        use-dispatcher: "akka.kafka.default-dispatcher"
+        use-dispatcher: "akka.kafka.saga-kafka"
         kafka-clients.enable.auto.commit: false
         wait-close-partition: 500ms
-        position-timeout: 5s
-        offset-for-times-timeout: 5s
-        metadata-request-timeout: 5s
+        position-timeout: 10s
+        offset-for-times-timeout: 10s
+        metadata-request-timeout: 10s
         eos-draining-check-interval: 30ms
         partition-handler-warning: 5s
         connection-checker.enable: false
         connection-checker.max-retries: 3
         connection-checker.check-interval: 15s
         connection-checker.backoff-factor: 2.0
-        max-batch: 1000
-        max-interval: 10s
-        parallelism: 1
+      saga-kafka:
+        type: "Dispatcher"
+        executor: "thread-pool-executor"
+        thread-pool-executor:
+          fixed-pool-size: 20
+
 
 akka-persistence-redis:
   redis:
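The SagaActor comments in this commit note that `deleteMessages`/`deleteSnapshot` leave orphaned `highestSequenceNr` keys in journal-redis, and that manual cleanup means pairing each `journal:persisted:item` key with its `:highestSequenceNr` counterpart. A sketch of that pairing logic over an in-memory key set (a real cleanup job would use a Redis client; the class and method names here are illustrative):

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class JournalCleanupSketch {

  // For each id in journal:persistenceIds, the journal entry and its
  // highestSequenceNr key should exist as a pair; if the journal entry is
  // gone, the actor has stopped, so drop the leftover counter key.
  static Set<String> cleanup(Set<String> persistenceIds, Set<String> keys) {
    Set<String> survivors = new LinkedHashSet<>();
    for (String id : persistenceIds) {
      String journal = "journal:persisted:" + id;
      String highest = journal + ":highestSequenceNr";
      if (keys.contains(journal)) {
        survivors.add(id);    // still an active pair, keep it
      } else {
        keys.remove(highest); // orphaned counter, delete it
      }
    }
    return survivors;
  }

  public static void main(String[] args) {
    Set<String> ids = new LinkedHashSet<>();
    ids.add("tx-1");
    ids.add("tx-2");
    Set<String> keys = new HashSet<>();
    keys.add("journal:persisted:tx-1");
    keys.add("journal:persisted:tx-1:highestSequenceNr");
    keys.add("journal:persisted:tx-2:highestSequenceNr"); // tx-2's journal already deleted

    Set<String> live = cleanup(ids, keys);
    System.out.println(live);                                              // [tx-1]
    System.out.println(keys.contains("journal:persisted:tx-2:highestSequenceNr")); // false
  }
}
```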


[servicecomb-pack] 19/42: SCB-1368 Clean up kafka client extra dependencies

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 2d26788d8c754c8c24f0afde08da405ea5d3232e
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 01:01:31 2019 +0800

    SCB-1368 Clean up kafka client extra dependencies
---
 alpha/alpha-fsm/pom.xml | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/alpha/alpha-fsm/pom.xml b/alpha/alpha-fsm/pom.xml
index 48bcada..8f53916 100644
--- a/alpha/alpha-fsm/pom.xml
+++ b/alpha/alpha-fsm/pom.xml
@@ -140,11 +140,6 @@
       <groupId>com.typesafe.akka</groupId>
       <artifactId>akka-slf4j_2.12</artifactId>
     </dependency>
-    <dependency>
-      <groupId>org.apache.kafka</groupId>
-      <artifactId>kafka-clients</artifactId>
-      <version>2.1.1</version>
-    </dependency>
 
     <!--
       jmx over http


[servicecomb-pack] 33/42: SCB-1368 Use dependency management to define the Kafka version

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit bf2b577b60bf82987dd9af246341646dc13a226d
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 18:05:49 2019 +0800

    SCB-1368 Use dependency management to define the Kafka version
---
 alpha/alpha-server/pom.xml | 1 -
 pom.xml                    | 6 ++++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/alpha/alpha-server/pom.xml b/alpha/alpha-server/pom.xml
index d6be3b7..d95b522 100644
--- a/alpha/alpha-server/pom.xml
+++ b/alpha/alpha-server/pom.xml
@@ -204,7 +204,6 @@
     <dependency>
       <groupId>org.apache.kafka</groupId>
       <artifactId>kafka-clients</artifactId>
-      <version>2.1.1</version>
     </dependency>
   </dependencies>
 
diff --git a/pom.xml b/pom.xml
index d41d63f..c001d6a 100644
--- a/pom.xml
+++ b/pom.xml
@@ -75,6 +75,7 @@
     <netty.boringssl.version>2.0.7.Final</netty.boringssl.version>
     <netty.version>4.1.24.Final</netty.version>
     <zookeeper.version>3.4.13</zookeeper.version>
+    <kafka.version>2.1.1</kafka.version>
   </properties>
 
   <name>ServiceComb Saga</name>
@@ -528,6 +529,11 @@
         <artifactId>commons-lang3</artifactId>
         <version>3.6</version>
       </dependency>
+      <dependency>
+        <groupId>org.apache.kafka</groupId>
+        <artifactId>kafka-clients</artifactId>
+        <version>${kafka.version}</version>
+      </dependency>
 
       <!-- test dependencies -->
       <dependency>


[servicecomb-pack] 02/42: SCB-1368 Add akka cluster property adapter

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 711eebc0634d014742bfdfcf4ef5572c1f7d5e2c
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Wed Aug 14 17:07:56 2019 +0800

    SCB-1368 Add akka cluster property adapter
---
 .../integration/akka/AkkaClusterListener.java      | 80 ++++++++++++++++++++++
 .../akka/AkkaConfigPropertyAdapter.java            | 26 +++++--
 .../src/main/resources/application.yaml            | 10 +++
 3 files changed, 111 insertions(+), 5 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaClusterListener.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaClusterListener.java
new file mode 100644
index 0000000..418cd2e
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaClusterListener.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.servicecomb.pack.alpha.fsm.spring.integration.akka;
+
+import akka.actor.AbstractActor;
+import akka.cluster.Cluster;
+import akka.cluster.ClusterEvent;
+import akka.cluster.ClusterEvent.MemberEvent;
+import akka.cluster.ClusterEvent.MemberRemoved;
+import akka.cluster.ClusterEvent.MemberUp;
+import akka.cluster.ClusterEvent.UnreachableMember;
+import akka.event.Logging;
+import akka.event.LoggingAdapter;
+import java.lang.invoke.MethodHandles;
+import java.util.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class AkkaClusterListener extends AbstractActor {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  LoggingAdapter AKKA_LOG = Logging.getLogger(getContext().getSystem(), this);
+  Cluster cluster = Cluster.get(getContext().getSystem());
+
+  @Override
+  public Receive createReceive() {
+    return receiveBuilder()
+        .match(MemberUp.class, mUp -> {
+          LOG.info("Member is Up: {}", mUp.member());
+        })
+        .match(UnreachableMember.class, mUnreachable -> {
+          LOG.info("Member detected as unreachable: {}", mUnreachable.member());
+        })
+        .match(MemberRemoved.class, mRemoved -> {
+          LOG.info("Member is Removed: {}", mRemoved.member());
+        })
+        .match(MemberEvent.class, message -> {
+          // ignore
+        })
+        .matchAny(msg -> AKKA_LOG.warning("Received unknown message: {}", msg))
+        .build();
+  }
+
+  //subscribe to cluster changes
+  @Override
+  public void preStart() {
+    cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(),
+        MemberEvent.class, UnreachableMember.class);
+  }
+
+  //re-subscribe when restart
+  @Override
+  public void postStop() {
+    cluster.unsubscribe(getSelf());
+  }
+
+  @Override
+  public void preRestart(Throwable reason, Optional<Object> message) {
+    AKKA_LOG.error(
+        reason,
+        "Restarting due to [{}] when processing [{}]",
+        reason.getMessage(),
+        message.isPresent() ? message.get() : "");
+  }
+}
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaConfigPropertyAdapter.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaConfigPropertyAdapter.java
index c6ae195..d364da7 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaConfigPropertyAdapter.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/AkkaConfigPropertyAdapter.java
@@ -31,10 +31,15 @@ public class AkkaConfigPropertyAdapter {
 
   private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   public static final String PROPERTY_SOURCE_NAME = "akkaConfig.";
+  static final String AKKA_CLUSTER_SEED_NODES_KEY = "akka.cluster.seed-nodes";
+  static final String AKKA_EXTENSIONS_KEY = "akka.extensions";
+  static final String AKKA_LOGGERS_KEY = "akka.loggers";
 
   public static Map<String, Object> getPropertyMap(ConfigurableEnvironment environment) {
     final Map<String, Object> propertyMap = new HashMap<>();
-
+    final List<String> seedNodes = new ArrayList<>();
+    final List<String> extensions = new ArrayList<>();
+    final List<String> loggers = new ArrayList<>();
     for (final PropertySource source : environment.getPropertySources()) {
       if (isEligiblePropertySource(source)) {
         final EnumerablePropertySource enumerable = (EnumerablePropertySource) source;
@@ -42,14 +47,25 @@ public class AkkaConfigPropertyAdapter {
         for (final String name : enumerable.getPropertyNames()) {
           if (name.startsWith(PROPERTY_SOURCE_NAME) && !propertyMap.containsKey(name)) {
             String key = name.substring(PROPERTY_SOURCE_NAME.length());
-            Object value = environment.getProperty(name);
-            if (LOG.isTraceEnabled()) {
-              LOG.trace("Adding property {}={}" + key, value);
+            String value = environment.getProperty(name);
+            if (key.startsWith(AKKA_CLUSTER_SEED_NODES_KEY)) {
+              seedNodes.add(value);
+            } else if (key.startsWith(AKKA_EXTENSIONS_KEY)) {
+              extensions.add(value);
+            } else if (key.startsWith(AKKA_LOGGERS_KEY)) {
+              loggers.add(value);
+            } else {
+              if (LOG.isTraceEnabled()) {
+                LOG.trace("Adding property {}={}", key, value);
+              }
+              propertyMap.put(key, value);
             }
-            propertyMap.put(key, value);
           }
         }
       }
+      propertyMap.put(AKKA_CLUSTER_SEED_NODES_KEY, seedNodes);
+      propertyMap.put(AKKA_EXTENSIONS_KEY, extensions);
+      propertyMap.put(AKKA_LOGGERS_KEY, loggers);
     }
 
     return Collections.unmodifiableMap(propertyMap);
diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index ed2c41e..23ae3e2 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -52,10 +52,20 @@ eureka:
 
 
 akkaConfig:
+  # persistence
   akka.persistence.journal.plugin: akka.persistence.journal.inmem
   akka.persistence.journal.leveldb.dir: target/example/journal
   akka.persistence.snapshot-store.plugin: akka.persistence.snapshot-store.local
   akka.persistence.snapshot-store.local.dir: target/example/snapshots
+  # cluster
+  akka.actor.provider: cluster
+  akka.remote.log-remote-lifecycle-events: info
+  akka.remote.netty.tcp.hostname: 127.0.0.1
+  akka.remote.netty.tcp.port: 8070
+  akka.cluster.seed-nodes: ["akka.tcp://alpha-akka@127.0.0.1:8070"]
+  #
+  akka.extensions: ["akka.cluster.metrics.ClusterMetricsExtension"]
+
 
 management:
   endpoints:
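
The adapter change in this patch strips the `akkaConfig.` prefix from flattened Spring properties and collects the list-style keys (seed nodes, extensions, loggers) into lists instead of indexed scalar entries. Below is a rough, self-contained sketch of that grouping logic; the class name `AkkaConfigSketch` and the `adapt` helper are illustrative, not part of the project.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Standalone sketch of the AkkaConfigPropertyAdapter grouping idea:
// properties prefixed with "akkaConfig." lose the prefix, and list-style
// keys such as akka.cluster.seed-nodes[0] are gathered into lists.
public class AkkaConfigSketch {

  static final String PREFIX = "akkaConfig.";
  static final String SEED_NODES = "akka.cluster.seed-nodes";
  static final String EXTENSIONS = "akka.extensions";
  static final String LOGGERS = "akka.loggers";

  public static Map<String, Object> adapt(Map<String, String> springProps) {
    Map<String, Object> result = new HashMap<>();
    List<String> seedNodes = new ArrayList<>();
    List<String> extensions = new ArrayList<>();
    List<String> loggers = new ArrayList<>();
    for (Map.Entry<String, String> e : springProps.entrySet()) {
      if (!e.getKey().startsWith(PREFIX)) {
        continue; // not an Akka override
      }
      String key = e.getKey().substring(PREFIX.length());
      if (key.startsWith(SEED_NODES)) {
        seedNodes.add(e.getValue());   // e.g. akka.cluster.seed-nodes[0]
      } else if (key.startsWith(EXTENSIONS)) {
        extensions.add(e.getValue());
      } else if (key.startsWith(LOGGERS)) {
        loggers.add(e.getValue());
      } else {
        result.put(key, e.getValue()); // plain scalar property
      }
    }
    result.put(SEED_NODES, seedNodes);
    result.put(EXTENSIONS, extensions);
    result.put(LOGGERS, loggers);
    return Collections.unmodifiableMap(result);
  }

  public static void main(String[] args) {
    Map<String, String> props = new LinkedHashMap<>();
    props.put("akkaConfig.akka.actor.provider", "cluster");
    props.put("akkaConfig.akka.cluster.seed-nodes[0]", "akka.tcp://alpha-akka@127.0.0.1:8070");
    props.put("akkaConfig.akka.cluster.seed-nodes[1]", "akka.tcp://alpha-akka@127.0.0.1:8071");
    Map<String, Object> akka = adapt(props);
    System.out.println(akka.get("akka.actor.provider"));   // cluster
    System.out.println(((List<?>) akka.get(SEED_NODES)).size()); // 2
  }
}
```

The resulting map can then be handed to Akka's `ConfigFactory` to build the actor system configuration, which is what the real adapter is used for.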


[servicecomb-pack] 32/42: SCB-1368 Updated document

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 24fd1bf00c9299036f95686bce65aabecd466a9c
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 17:07:08 2019 +0800

    SCB-1368 Updated document
---
 .../src/main/resources/application.yaml            |   4 +-
 docs/fsm/akka_zh.md                                |  69 +++
 docs/fsm/apis_zh.md                                | 132 ++++++
 docs/fsm/assets/alpha-cluster-architecture.png     | Bin 0 -> 537656 bytes
 docs/fsm/eventchannel_zh.md                        |  34 ++
 docs/fsm/{fsm_manual_zh.md => fsm_manual.md}       | 176 ++++---
 docs/fsm/fsm_manual_zh.md                          | 176 ++++---
 docs/fsm/how_to_use_fsm.md                         | 522 --------------------
 docs/fsm/how_to_use_fsm_zh.md                      | 527 ---------------------
 docs/fsm/persistence_zh.md                         | 235 +++++++++
 docs/user_guide.md                                 |   2 +-
 docs/user_guide_zh.md                              |   2 +-
 12 files changed, 696 insertions(+), 1183 deletions(-)

diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index 05bfc20..e0d5b23 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -68,10 +68,10 @@ akkaConfig:
     persistence:
       journal:
         plugin: akka.persistence.journal.inmem
-        leveldb.dir: target/example/journal
+        leveldb.dir: actor/persistence/journal
       snapshot-store:
         plugin: akka.persistence.snapshot-store.local
-        local.dir: target/example/snapshots
+        local.dir: actor/persistence/snapshots
     remote:
       watch-failure-detector:
         acceptable-heartbeat-pause: 6s
diff --git a/docs/fsm/akka_zh.md b/docs/fsm/akka_zh.md
new file mode 100644
index 0000000..d51384e
--- /dev/null
+++ b/docs/fsm/akka_zh.md
@@ -0,0 +1,69 @@
+# Akka Configuration
+
+Alpha already defines a number of Akka parameters. To override them externally, use the form `akkaConfig.{akka key} = value`
+
+* Alpha standalone mode
+
+  State machine persistence parameters
+
+  ```yaml
+  akkaConfig:
+    akka:
+      persistence:
+        journal:
+          plugin: akka.persistence.journal.inmem
+          leveldb.dir: actor/persistence/journal
+        snapshot-store:
+          plugin: akka.persistence.snapshot-store.local
+          local.dir: actor/persistence/snapshots
+  ```
+
+* Alpha cluster mode
+
+  State machine persistence parameters
+
+  ```yaml
+  akkaConfig:
+    akka:
+      actor:
+        provider: cluster
+      persistence:
+        at-least-once-delivery:
+          redeliver-interval: 10s
+          redelivery-burst-limit: 2000
+        journal:
+          plugin: akka-persistence-redis.journal
+        snapshot-store:
+          plugin: akka-persistence-redis.snapshot
+  akka-persistence-redis:
+    redis:
+      mode: "simple"
+      host: "127.0.0.1"
+      port: 6379
+      database: 0        
+  ```
+
+  Kafka consumer parameters
+
+  ```yaml
+  akkaConfig:
+    akka:
+      kafka:
+        consumer:
+          poll-interval: 50ms
+          stop-timeout: 30s
+          close-timeout: 20s
+          commit-timeout: 15s
+          commit-time-warning: 5s
+          commit-refresh-interval: infinite
+          wait-close-partition: 500ms
+          position-timeout: 10s
+          offset-for-times-timeout: 10s
+          metadata-request-timeout: 10s
+          eos-draining-check-interval: 30ms
+          partition-handler-warning: 5s
+          connection-checker.enable: false
+          connection-checker.max-retries: 3
+          connection-checker.check-interval: 15s
+          connection-checker.backoff-factor: 2.0
+  ```
\ No newline at end of file
diff --git a/docs/fsm/apis_zh.md b/docs/fsm/apis_zh.md
new file mode 100644
index 0000000..dab3577
--- /dev/null
+++ b/docs/fsm/apis_zh.md
@@ -0,0 +1,132 @@
+# APIs
+
+#### Performance Metrics
+
+You can query Alpha's performance metrics through the API. To try this feature quickly, use the `AlphaBenchmark` benchmarking tool to simulate traffic
+
+For example, the following command simulates sending 1000 global transactions with a concurrency of 10
+
+```bash
+java -jar alpha-benchmark-0.5.0-SNAPSHOT-exec.jar --alpha.cluster.address=0.0.0.0:8080 --w=0 --n=1000 --c=10
+```
+
+Query the performance metrics
+
+```bash
+curl http://localhost:8090/alpha/api/v1/metrics
+{
+  nodeType: "MASTER",
+  metrics: {
+    eventReceived: 8000,
+    eventAccepted: 8000,
+    eventRejected: 0,
+    eventAvgTime: 0,
+    actorReceived: 8000,
+    actorAccepted: 8000,
+    actorRejected: 0,
+    actorAvgTime: 0,
+    sagaBeginCounter: 1000,
+    sagaEndCounter: 1000,
+    sagaAvgTime: 9,
+    committed: 1000,
+    compensated: 0,
+    suspended: 0,
+    repositoryReceived: 1000,
+    repositoryAccepted: 1000,
+    repositoryRejected: 0,
+    repositoryAvgTime: 0.88
+  }
+}
+```
+
+For example, `sagaAvgTime: 9` above means each global transaction took 9 ms to process in Akka, and `repositoryAvgTime: 0.88` means persisting each global transaction took 0.88 ms
+
+Metric descriptions
+
+- eventReceived: number of gRPC events received by Alpha
+- eventAccepted: number of gRPC events processed by Alpha (events put into the event channel)
+- eventRejected: number of gRPC events rejected by Alpha
+- eventAvgTime: average Alpha processing time (milliseconds)
+- actorReceived: number of events received by Akka
+- actorAccepted: number of events processed by Akka
+- actorRejected: number of events rejected by Akka
+- actorAvgTime: average Akka processing time (milliseconds)
+- sagaBeginCounter: number of Saga global transactions started
+- sagaEndCounter: number of Saga global transactions finished
+- sagaAvgTime: average processing time (milliseconds)
+- committed: number of Saga global transactions in COMMITTED state
+- compensated: number of Saga global transactions in COMPENSATED state
+- suspended: number of Saga global transactions in SUSPENDED state
+- repositoryReceived: number of global transactions received by the persistence module
+- repositoryAccepted: number of global transactions processed by the persistence module
+- repositoryRejected: number of global transactions rejected by the persistence module
+- repositoryAvgTime: average persistence time (milliseconds)
+
+#### Transaction Queries
+
+> Requires Elasticsearch to be enabled for transaction storage
+
+- Query the transaction list
+
+  ```bash
+  curl -X GET http://localhost:8090/alpha/api/v1/transaction?page=0&size=50
+  
+  {
+    "total": 2002,
+    "page": 0,
+    "size": 50,
+    "elapsed": 581,
+    "globalTransactions": [...]
+  }
+  ```
+
+  Request parameters
+
+  - page: page number
+  - size: number of rows to return
+
+  Response fields
+
+  - total: total number of rows
+  - page: page number of this result
+  - size: number of rows in this result
+  - elapsed: query time (milliseconds)
+  - globalTransactions: list of transaction data
+
+- Query a single transaction
+
+  ```bash
+  curl -X GET http://localhost:8090/alpha/api/v1/transaction/{globalTxId}
+  
+  {
+    "globalTxId": "e00a3bac-de6b-498f-99a4-c11d3087fd14",
+    "type": "SAGA",
+    "serviceName": "alpha-benchmark",
+    "instanceId": "alpha-benchmark-127.0.0.1",
+    "beginTime": 1564762932963,
+    "endTime": 1564762933197,
+    "state": "COMMITTED",
+    "subTxSize": 3,
+    "durationTime": 408,
+    "subTransactions": [...],
+    "events": [...]
+  }
+  ```
+
+  Request parameters
+
+  - globalTxId: global transaction ID
+
+  Response fields
+
+  - globalTxId: global transaction ID
+  - type: transaction type; currently only SAGA, with TCC to be added later
+  - serviceName: service name of the global transaction initiator
+  - instanceId: instance ID of the global transaction initiator
+  - beginTime: transaction start time
+  - endTime: transaction end time
+  - state: final transaction state
+  - subTxSize: number of sub-transactions included
+  - durationTime: global transaction processing time
+  - subTransactions: list of sub-transaction data
+  - events: list of events
\ No newline at end of file
diff --git a/docs/fsm/assets/alpha-cluster-architecture.png b/docs/fsm/assets/alpha-cluster-architecture.png
new file mode 100644
index 0000000..1b3c2e5
Binary files /dev/null and b/docs/fsm/assets/alpha-cluster-architecture.png differ
diff --git a/docs/fsm/eventchannel_zh.md b/docs/fsm/eventchannel_zh.md
new file mode 100644
index 0000000..3fcabcd
--- /dev/null
+++ b/docs/fsm/eventchannel_zh.md
@@ -0,0 +1,34 @@
+# Event Channels
+
+After Alpha receives an event from Omega, the event is put into an event channel to wait for processing by Akka. There are two channel implementations: an in-memory channel and a Kafka channel
+
+| Channel type | Mode       | Description                                                  |
+| ------------ | ---------- | ------------------------------------------------------------ |
+| memory       | standalone | Uses memory as the data channel; not recommended for production |
+| kafka        | cluster    | Uses Kafka as the data channel with the global transaction ID as the partitioning key; all nodes in the cluster work concurrently and can be scaled horizontally. Used by default once `spring.profiles.active=prd,cluster` is configured |
+
+The channel type can be configured with the `alpha.feature.akka.channel.type` parameter
+
+- Memory channel parameters
+
+| Parameter name                         | Value  | Description                                                 |
+| -------------------------------------- | ------ | ----------------------------------------------------------- |
+| alpha.feature.akka.channel.type        | memory |                                                             |
+| alpha.feature.akka.channel.memory.size | -1     | In-memory queue size for the memory channel; -1 means Integer.MAX_VALUE |
+
+- Kafka channel parameters
+
+| Parameter name                          | Value    | Description                    |
+| --------------------------------------- | -------- | ------------------------------ |
+| alpha.feature.akka.channel.type         | kafka    |                                |
+| spring.kafka.bootstrap-servers          |          | Kafka bootstrap server address |
+| spring.kafka.producer.batch-size        | 16384    |                                |
+| spring.kafka.producer.retries           | 0        |                                |
+| spring.kafka.producer.buffer.memory     | 33554432 |                                |
+| spring.kafka.consumer.auto.offset.reset | earliest |                                |
+| spring.kafka.listener.pollTimeout       | 1500     |                                |
+| kafka.numPartitions                     | 6        |                                |
+| kafka.replicationFactor                 | 1        |                                |
+
+
+
diff --git a/docs/fsm/fsm_manual_zh.md b/docs/fsm/fsm_manual.md
similarity index 56%
copy from docs/fsm/fsm_manual_zh.md
copy to docs/fsm/fsm_manual.md
index 7e81473..ab42442 100755
--- a/docs/fsm/fsm_manual_zh.md
+++ b/docs/fsm/fsm_manual.md
@@ -1,15 +1,15 @@
-# State Machine Mode User Manual
+# State Machine Mode
 
 Since version 0.5.0, ServiceComb Pack has been using a state machine model to handle the complex relationships between events and states in distributed transactions. We treat Alpha as a box that records the state of every global transaction: after receiving a transaction message from Omega (global transaction started, stopped, or failed; sub-transaction started, stopped, or failed; and so on), Alpha performs actions (wait, compensate, time out) and switches states.
 
-Distributed transaction events put us in a very complex situation. We wanted a DSL that could define the state machine clearly while also solving the persistence and distribution problems of the state machine itself. After some experimentation we found Akka FSM to be a good fit. Follow along to try this new feature.
+Distributed transaction events put us in a very complex situation. We wanted a DSL that could define the state machine clearly while also solving the persistence and distribution problems of the state machine itself. After some experimentation we found [Akka](https://github.com/akka/akka) to be a good fit. Follow along to try this new feature.
 
 ## Major Updates
 
 * Akka state machines replace table-scan-based state resolution
 * Performance improved by an order of magnitude: 18k+ events per second, 1.2k+ global transactions per second
 * Built-in health metrics collector for a clear view of system bottlenecks
-* High availability through multiple data channel adapters
+* Distributed cluster support
 * Forward compatible with the original gRPC protocol
 * Brand-new visual monitoring UI
 * Brand-new open APIs
@@ -139,41 +139,109 @@ Sub Transactions panel: sub-transaction IDs, sub-transaction states, sub
 
 ## Cluster
 
-Processing capacity can be scaled horizontally, with high availability, by deploying multiple Alpha instances; the cluster depends on a Kafka service.
-
-* Start Kafka. You can start it with docker compose; here is a sample compose file
-
-  ```yaml
-  version: '3.2'
-  services:
-    zookeeper:
-      image: coolbeevip/alpine-zookeeper
-      ports:
-        - 2181:2181
-    kafka:
-      image: coolbeevip/alpine-kafka
-      environment:
-        KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10
-        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-      ports:
-        - 9092:9092
-      links:
-        - zookeeper
-  ```
-
-  **Note:** KAFKA_ADVERTISED_HOST_NAME must be set to the server's real IP address; it cannot be 127.0.0.1 or localhost
-
-* Start two Alpha nodes
-
-  Start Alpha 1
+> Requires building version 0.6.0 yourself from the master branch
+
+With Kafka and Redis we can deploy an Alpha cluster with distributed processing capability. The Alpha cluster uses Akka Cluster Sharding and Akka Persistence to achieve dynamic computation and failure self-healing.
+
+![image-20190927150455006](assets/alpha-cluster-architecture.png)
+
+The diagram above shows the working architecture of an Alpha cluster with two Alpha nodes, 8070 and 8071 (these numbers are the communication ports of the [Gossip](https://en.wikipedia.org/wiki/Gossip_protocol) protocol). Omega messages are sent to Kafka, partitioned by globalTxId, which guarantees that the sub-transactions of one global transaction are consumed in order. The KafkaConsumer reads events from Kafka and sends them to the cluster sharding coordinator, ShardingCoordinator, which creates SagaActor instances in the Alpha cluster and delivers the messages. A running SagaActor persists each received message to Redis, so when a node in the cluster crashes, the SagaActor and its state can be recovered on another cluster node. When a SagaActor finishes, the data of that global transaction is stored in ES.
+
+Starting an Alpha cluster is easy. First start the middleware the cluster needs: Kafka, Redis, PostgreSQL/MySQL, and ElasticSearch. You can start them with Docker (a more reliable deployment is recommended in production). Below is a docker compose file, servicecomb-pack-middleware.yml, which you can start directly with `docker-compose -f servicecomb-pack-middleware.yml up -d`.
+
+```yaml
+version: '3.2'
+services:
+  postgres:
+    image: postgres:9.6
+    hostname: postgres
+    container_name: postgres
+    ports:
+      - '5432:5432'
+    environment:
+      - POSTGRES_DB=saga
+      - POSTGRES_USER=saga
+      - POSTGRES_PASSWORD=password
+
+  elasticsearch:
+    image: elasticsearch:6.6.2
+    hostname: elasticsearch
+    container_name: elasticsearch
+    environment:
+      - "ES_JAVA_OPTS=-Xmx256m -Xms256m"
+      - "discovery.type=single-node"
+      - "cluster.routing.allocation.disk.threshold_enabled=false"
+    ulimits:
+      memlock:
+        soft: -1
+        hard: -1
+    ports:
+      - 9200:9200
+      - 9300:9300
+
+  zookeeper:
+    image: coolbeevip/alpine-zookeeper:3.4.14
+    hostname: zookeeper
+    container_name: zookeeper    
+    ports:
+      - 2181:2181
+
+  kafka:
+    image: coolbeevip/alpine-kafka:2.2.1-2.12
+    hostname: kafka
+    container_name: kafka    
+    environment:
+      KAFKA_ADVERTISED_HOST_NAME: 10.50.8.3
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    ports:
+      - 9092:9092
+    links:
+      - zookeeper:zookeeper
+    depends_on:
+      - zookeeper   
+
+  redis:
+    image: redis:5.0.5-alpine
+    hostname: redis
+    container_name: redis
+    ports:
+      - 6379:6379   
+```
+
+**Note:** KAFKA_ADVERTISED_HOST_NAME must be set to the server's real IP address; it cannot be 127.0.0.1 or localhost
+
+Then we start a cluster with two Alpha nodes. Because I am starting both nodes on one machine, they must use different ports
+
+* Port plan
+
+  | Node    | gRPC port | REST port | Gossip port |
+  | ------- | --------- | --------- | ----------- |
+  | Alpha 1 | 8080      | 8090      | 8070        |
+  | Alpha 2 | 8081      | 8091      | 8071        |
+
+* Cluster parameters
+
+  | Parameter name                                   | Description                                                  |
+  | ------------------------------------------------ | ------------------------------------------------------------ |
+  | server.port                                      | REST port, default 8090                                      |
+  | alpha.server.port                                | gRPC port, default 8080                                      |
+  | akkaConfig.akka.remote.artery.canonical.port     | Gossip port, default 8070                                    |
+  | spring.kafka.bootstrap-servers                   | Kafka address                                                |
+  | akkaConfig.akka-persistence-redis.redis.host     | Redis host IP                                                |
+  | akkaConfig.akka-persistence-redis.redis.port     | Redis port                                                   |
+  | akkaConfig.akka-persistence-redis.redis.database | Redis database                                               |
+  | akkaConfig.akka.cluster.seed-nodes[N]            | Gossip seed node address; with multiple seed nodes, repeat this parameter on separate lines, with N starting at 0 and incrementing |
+  | spring.profiles.active                           | Must be set to prd,cluster                                   |
+
+* Start Alpha 1
 
   ```bash
-  java -jar alpha-server-${version}-exec.jar \
+  java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
     --server.port=8090 \
     --server.host=127.0.0.1 \
     --alpha.server.port=8080 \
     --alpha.feature.akka.enabled=true \
-    --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
+    --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
     --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
@@ -186,15 +254,15 @@ Sub Transactions 面板:本事务包含的子事务ID,子事务状态,子
     --spring.profiles.active=prd,cluster
   ```
 
-  Start Alpha 2
+* Start Alpha 2
 
   ```bash
-  java -jar alpha-server-${version}-exec.jar \
+  java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
     --server.port=8091 \
     --server.host=127.0.0.1 \
     --alpha.server.port=8081 \
     --alpha.feature.akka.enabled=true \
-    --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
+    --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
     --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
@@ -207,42 +275,20 @@ Sub Transactions 面板:本事务包含的子事务ID,子事务状态,子
     --spring.profiles.active=prd,cluster
   ```
 
-  Cluster parameter descriptions
-
-  | Parameter name                                               | Default | Description                                 |
-  | ------------------------------------------------------------ | ------- | ------------------------------------------- |
-  | server.port                                                  | 8090    | REST port, unique per node                  |
-  | alpha.server.port                                            | 8080    | gRPC port, unique per node                  |
-  | spring.kafka.bootstrap-servers                               |         | Kafka address                               |
-| kafka.numPartitions                                          | 6       | Kafka address                               |
-  | akkaConfig.akka.remote.artery.canonical.port                 |         | Akka cluster port, unique per node          |
-  | akkaConfig.akka.cluster.seed-nodes[x]                        |         | Akka cluster seed node addresses, one line per seed node |
-  | akkaConfig.akka-persistence-redis.redis.host                 |         | Redis host                                  |
-  | akkaConfig.akka-persistence-redis.redis.port                 |         | Redis port                                  |
-  | akkaConfig.akka-persistence-redis.redis.database             | 0       | Redis database name                         |
-  | alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100     | ES bulk commit size                         |
-  | alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000    | ES periodic commit interval                 |
-  | spring.profiles.active                                       |         | Active profiles, must be prd,cluster        |
-
-## High Availability
-
-In a cluster deployment, when one node goes down, another node automatically takes over the unfinished Actors of the failed node
-
-**Note:** Alpha receives transaction events from Kafka with "at least once" delivery, so make sure the Kafka service is reliable
-
-**Note:** The Alpha state machine stores its current state in Redis and, after a node failure, recovers the state machine on another cluster node via Redis, so make sure the Redis service is reliable
+## Dynamic Scaling
 
-**Note:** The ES bulk commit size set by `alpha.feature.akka.transaction.repository.elasticsearch.batchSize` defaults to 100; in scenarios with high data-reliability requirements set this parameter to 0
+- Alpha supports scaling its online processing capacity by dynamically adding nodes
+- Alpha creates its Kafka topic with 6 partitions by default, so processing performance no longer improves once the cluster has more than 6 nodes; you can change the number of automatically created partitions with the `kafka.numPartitions` parameter at first startup according to your plan
 
-## Dynamic Scaling
+## Appendix
 
-After receiving an event, Alpha puts it into Kafka; all nodes in the Alpha cluster consume from Kafka and feed the state machines. The default number of topic partitions is 6; if you deploy more than 6 cluster nodes, change the default partition count with the `kafka.numPartitions` parameter
+[Event channels](eventchannel_zh.md)
 
-## Roadmap
+[Persistence](persistence_zh.md)
 
-Integrate the APIs with Swagger
+[Akka configuration](akka_zh.md)
 
-## Appendix
+[APIs](apis_zh.md)
 
 [Design document](design_fsm_zh.md)
 
diff --git a/docs/fsm/fsm_manual_zh.md b/docs/fsm/fsm_manual_zh.md
index 7e81473..47db41e 100755
--- a/docs/fsm/fsm_manual_zh.md
+++ b/docs/fsm/fsm_manual_zh.md
@@ -1,15 +1,15 @@
-# State Machine Mode User Manual
+# State Machine Mode
 
 Since version 0.5.0, ServiceComb Pack has been using a state machine model to handle the complex relationships between events and states in distributed transactions. We treat Alpha as a box that records the state of every global transaction: after receiving a transaction message from Omega (global transaction started, stopped, or failed; sub-transaction started, stopped, or failed; and so on), Alpha performs actions (wait, compensate, time out) and switches states.
 
-Distributed transaction events put us in a very complex situation. We wanted a DSL that could define the state machine clearly while also solving the persistence and distribution problems of the state machine itself. After some experimentation we found Akka FSM to be a good fit. Follow along to try this new feature.
+Distributed transaction events put us in a very complex situation. We wanted a DSL that could define the state machine clearly while also solving the persistence and distribution problems of the state machine itself. After some experimentation we found [Akka](https://github.com/akka/akka) to be a good fit. Follow along to try this new feature.
 
 ## Major Updates
 
 * Akka state machines replace table-scan-based state resolution
 * Performance improved by an order of magnitude: 18k+ events per second, 1.2k+ global transactions per second
 * Built-in health metrics collector for a clear view of system bottlenecks
-* High availability through multiple data channel adapters
+* Distributed cluster support
 * Forward compatible with the original gRPC protocol
 * Brand-new visual monitoring UI
 * Brand-new open APIs
@@ -139,41 +139,109 @@ Sub Transactions panel: sub-transaction IDs, sub-transaction states, sub
 
 ## Cluster
 
-Processing capacity can be scaled horizontally, with high availability, by deploying multiple Alpha instances; the cluster depends on a Kafka service.
-
-* Start Kafka. You can start it with docker compose; here is a sample compose file
-
-  ```yaml
-  version: '3.2'
-  services:
-    zookeeper:
-      image: coolbeevip/alpine-zookeeper
-      ports:
-        - 2181:2181
-    kafka:
-      image: coolbeevip/alpine-kafka
-      environment:
-        KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10
-        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-      ports:
-        - 9092:9092
-      links:
-        - zookeeper
-  ```
-
-  **Note:** KAFKA_ADVERTISED_HOST_NAME must be set to the server's real IP address; it cannot be 127.0.0.1 or localhost
-
-* Start two Alpha nodes
-
-  Start Alpha 1
+> Requires building version 0.6.0 yourself from the master branch
+
+With Kafka and Redis we can deploy an Alpha cluster with distributed processing capability. The Alpha cluster uses Akka Cluster Sharding and Akka Persistence to achieve dynamic computation and failure self-healing.
+
+![image-20190927150455006](assets/alpha-cluster-architecture.png)
+
+The diagram above shows the working architecture of an Alpha cluster with two Alpha nodes, 8070 and 8071 (these numbers are the communication ports of the [Gossip](https://en.wikipedia.org/wiki/Gossip_protocol) protocol). Omega messages are sent to Kafka, partitioned by globalTxId, which guarantees that the sub-transactions of one global transaction are consumed in order. The KafkaConsumer reads events from Kafka and sends them to the cluster sharding coordinator, ShardingCoordinator, which creates SagaActor instances in the Alpha cluster and delivers the messages. A running SagaActor persists each received message to Redis, so when a node in the cluster crashes, the SagaActor and its state can be recovered on another cluster node. When a SagaActor finishes, the data of that global transaction is stored in ES.
+
+Starting an Alpha cluster is easy. First start the middleware the cluster needs: Kafka, Redis, PostgreSQL/MySQL, and ElasticSearch. You can start them with Docker (a more reliable deployment is recommended in production). Below is a docker compose file, servicecomb-pack-middleware.yml, which you can start directly with `docker-compose -f servicecomb-pack-middleware.yml up -d`.
+
+```yaml
+version: '3.2'
+services:
+  postgres:
+    image: postgres:9.6
+    hostname: postgres
+    container_name: postgres
+    ports:
+      - '5432:5432'
+    environment:
+      - POSTGRES_DB=saga
+      - POSTGRES_USER=saga
+      - POSTGRES_PASSWORD=password
+
+  elasticsearch:
+    image: elasticsearch:6.6.2
+    hostname: elasticsearch
+    container_name: elasticsearch
+    environment:
+      - "ES_JAVA_OPTS=-Xmx256m -Xms256m"
+      - "discovery.type=single-node"
+      - "cluster.routing.allocation.disk.threshold_enabled=false"
+    ulimits:
+      memlock:
+        soft: -1
+        hard: -1
+    ports:
+      - 9200:9200
+      - 9300:9300
+
+  zookeeper:
+    image: coolbeevip/alpine-zookeeper:3.4.14
+    hostname: zookeeper
+    container_name: zookeeper    
+    ports:
+      - 2181:2181
+
+  kafka:
+    image: coolbeevip/alpine-kafka:2.2.1-2.12
+    hostname: kafka
+    container_name: kafka    
+    environment:
+      KAFKA_ADVERTISED_HOST_NAME: 10.50.8.3
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    ports:
+      - 9092:9092
+    links:
+      - zookeeper:zookeeper
+    depends_on:
+      - zookeeper   
+
+  redis:
+    image: redis:5.0.5-alpine
+    hostname: redis
+    container_name: redis
+    ports:
+      - 6379:6379   
+```
+
+**Note:** KAFKA_ADVERTISED_HOST_NAME must be set to the server's real IP address; it cannot be 127.0.0.1 or localhost
+
+Then we start a cluster with two Alpha nodes. Because I am starting both nodes on one machine, they must use different ports
+
+* Port plan
+
+  | Node    | gRPC port | REST port | Gossip port |
+  | ------- | --------- | --------- | ----------- |
+  | Alpha 1 | 8080      | 8090      | 8070        |
+  | Alpha 2 | 8081      | 8091      | 8071        |
+
+* Cluster parameters
+
+  | Parameter name                                   | Description                                                  |
+  | ------------------------------------------------ | ------------------------------------------------------------ |
+  | server.port                                      | REST port, default 8090                                      |
+  | alpha.server.port                                | gRPC port, default 8080                                      |
+  | akkaConfig.akka.remote.artery.canonical.port     | Gossip port, default 8070                                    |
+  | spring.kafka.bootstrap-servers                   | Kafka address                                                |
+  | akkaConfig.akka-persistence-redis.redis.host     | Redis host IP                                                |
+  | akkaConfig.akka-persistence-redis.redis.port     | Redis port                                                   |
+  | akkaConfig.akka-persistence-redis.redis.database | Redis database                                               |
+  | akkaConfig.akka.cluster.seed-nodes[N]            | Gossip seed node address; with multiple seed nodes, repeat this parameter on separate lines, with N starting at 0 and incrementing |
+  | spring.profiles.active                           | Must be set to prd,cluster                                   |
+
+* Start Alpha 1
 
   ```bash
-  java -jar alpha-server-${version}-exec.jar \
+  java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
     --server.port=8090 \
     --server.host=127.0.0.1 \
     --alpha.server.port=8080 \
     --alpha.feature.akka.enabled=true \
-    --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
+    --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
     --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
@@ -186,15 +254,15 @@ Sub Transactions 面板:本事务包含的子事务ID,子事务状态,子
     --spring.profiles.active=prd,cluster
   ```
 
-  Start Alpha 2
+* Start Alpha 2
 
   ```bash
-  java -jar alpha-server-${version}-exec.jar \
+  java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
     --server.port=8091 \
     --server.host=127.0.0.1 \
     --alpha.server.port=8081 \
     --alpha.feature.akka.enabled=true \
-    --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
+    --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
     --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
@@ -207,42 +275,20 @@ Sub Transactions 面板:本事务包含的子事务ID,子事务状态,子
     --spring.profiles.active=prd,cluster
   ```
 
-  Cluster parameter description
-
-  | Parameter                                                    | Default | Description                                |
-  | ------------------------------------------------------------ | ------ | ------------------------------------------ |
-  | server.port                                                  | 8090   | REST port, unique per node                 |
-  | alpha.server.port                                            | 8080   | gRPC port, unique per node                 |
-  | spring.kafka.bootstrap-servers                               |        | Kafka address                              |
-  | kafka.numPartitions                                          | 6      | Number of Kafka topic partitions           |
-  | akkaConfig.akka.remote.artery.canonical.port                 |        | Akka cluster port, unique per node         |
-  | akkaConfig.akka.cluster.seed-nodes[x]                        |        | Akka cluster seed node address, one line per seed node |
-  | akkaConfig.akka-persistence-redis.redis.host                 |        | Redis host                                 |
-  | akkaConfig.akka-persistence-redis.redis.port                 |        | Redis port                                 |
-  | akkaConfig.akka-persistence-redis.redis.database             | 0      | Redis database                             |
-  | alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100    | Elasticsearch batch commit size            |
-  | alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000   | Elasticsearch periodic commit interval     |
-  | spring.profiles.active                                       |        | Active profiles, must be set to prd,cluster |
-
-## High Availability
-
-In a cluster deployment, when one node goes down another node automatically takes over the unfinished Actors of the failed node
-
-**Note:** Alpha receives transaction events from Kafka with at-least-once delivery, so make sure the Kafka service is reliable
-
-**Note:** The Alpha state machine stores its current state in Redis and restores the state machine on another cluster node from Redis after a node failure, so make sure the Redis service is reliable
+## Dynamic Scaling
 
-**Note:** The Elasticsearch batch commit size set by `alpha.feature.akka.transaction.repository.elasticsearch.batchSize` defaults to 100; in scenarios with high data-reliability requirements, set this parameter to 0
+* Alpha supports scaling its online processing capacity by dynamically adding nodes
+* The Kafka topic Alpha creates by default has 6 partitions, so a cluster of more than 6 Alpha nodes no longer improves throughput; you can plan ahead and change the partition count of the auto-created topic with the `kafka.numPartitions` parameter on first startup
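As a sketch, the partition count of the auto-created topic could be raised on first startup like this; 12 is an arbitrary example value, and the elided flags are the same as in the cluster startup commands above:

```
java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
  --kafka.numPartitions=12 \
  ...
```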
 
-## Dynamic Scaling
+## Appendix
 
-After Alpha receives an event message it puts it into Kafka; all nodes in the Alpha cluster consume from Kafka and hand the data to the state machine for processing. The topic is created with 6 partitions by default; when your cluster has more than 6 nodes you can change the default partition count with the `kafka.numPartitions` parameter
+[Event Channel](eventchannel_zh.md)
 
-## Roadmap
+[Transaction Data Persistence](persistence_zh.md)
 
-Integrate Swagger with the APIs
+[Akka Configuration](akka_zh.md)
 
-## Appendix
+[APIs](apis_zh.md)
 
 [Design Document](design_fsm_zh.md)
 
diff --git a/docs/fsm/how_to_use_fsm.md b/docs/fsm/how_to_use_fsm.md
deleted file mode 100644
index a10643e..0000000
--- a/docs/fsm/how_to_use_fsm.md
+++ /dev/null
@@ -1,522 +0,0 @@
-# Alpha With State Machine
-
-## Quick Start
-
-The state machine mode saves completed transaction data to Elasticsearch
-
-* Run PostgreSQL
-
-  ```bash
-  docker run -d -e "POSTGRES_DB=saga" -e "POSTGRES_USER=saga" -e "POSTGRES_PASSWORD=password" -p 5432:5432 postgres
-  ```
-
-* Run Elasticsearch
-
-  ```bash
-  docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2
-  ```
-
-* Run Alpha
-  Use `alpha.feature.akka.enabled=true` to enable state machine mode
-
-  ```bash
-  java -jar alpha-server-${version}-exec.jar \
-    --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
-    --spring.datasource.username=saga \
-    --spring.datasource.password=password \
-    --spring.profiles.active=prd \
-    --alpha.feature.akka.enabled=true \
-    --alpha.feature.akka.transaction.repository.type=elasticsearch \
-    --spring.data.elasticsearch.cluster-name=docker-cluster \
-    --spring.data.elasticsearch.cluster-nodes=localhost:9300  
-  ```
-
-  **NOTE:** `spring.data.elasticsearch.cluster-name` is the Elasticsearch cluster name; when Elasticsearch runs in Docker it defaults to `docker-cluster`. You can query the cluster name with `curl http://localhost:9200/`
-
-* Omega
-
-  Use `alpha.feature.akka.enabled=true` to enable state machine mode
-
-  ```properties
-  alpha.feature.akka.enabled=true
-  ```
-
-* WEB
-
-  Open http://localhost:8090/admin in browser,  [Screencast](https://youtu.be/ORoRkZeg8gA)
-
-  Dashboard
-
-  ![image-20190809122237766](assets/ui-dashboard.png)
-
-  Transactions List
-
-  ![image-20190809122324563](assets/ui-transactions-list.png)
-
-  Transaction Details - Successful
-
-  ![image-20190809122352852](assets/ui-transaction-details-successful.png)
-
-  Transaction Details - Compensated
-
-  ![image-20190809122516345](assets/ui-transaction-details-compensated.png)
-
-  Transaction Details - Failed
-
-  ![image-20190809122442186](assets/ui-transaction-details-failed.png)
-
-## APIs
-
-#### Metrics
-
-You can query Alpha metrics through a RESTful API. Use the `AlphaBenchmark` tool to simulate sending data and quickly try this feature out.
-
-For example, 10 concurrent connections sending 1000 global transactions:
-
-```bash
-java -jar alpha-benchmark-0.5.0-SNAPSHOT-exec.jar --alpha.cluster.address=0.0.0.0:8080 --w=0 --n=1000 --c=10
-```
-
-Query metrics
-
-```bash
-curl http://localhost:8090/alpha/api/v1/metrics
-
-{
-  nodeType: "MASTER",
-  metrics: {
-    eventReceived: 8000,
-    eventAccepted: 8000,
-    eventRejected: 0,
-    eventAvgTime: 0,
-    actorReceived: 8000,
-    actorAccepted: 8000,
-    actorRejected: 0,
-    actorAvgTime: 0,
-    sagaBeginCounter: 1000,
-    sagaEndCounter: 1000,
-    sagaAvgTime: 9,
-    committed: 1000,
-    compensated: 0,
-    suspended: 0,
-    repositoryReceived: 1000,
-    repositoryAccepted: 1000,
-    repositoryRejected: 0,
-    repositoryAvgTime: 0.88
-  }
-}
-```
-
-Field descriptions
-
-* eventReceived: number of gRPC events received
-* eventAccepted: number of gRPC events accepted (events put into the channel)
-* eventRejected:  number of gRPC events rejected
-* eventAvgTime: average elapsed time on events (milliseconds)
-* actorReceived: number of events received by actor
-* actorAccepted:  number of events accepted by actor
-* actorRejected: number of events rejected by actor
-* actorAvgTime: average elapsed time on actor (milliseconds)
-* sagaBeginCounter: saga global transaction start counter
-* sagaEndCounter: saga global transaction end counter
-* sagaAvgTime: average elapsed time on saga global transaction (milliseconds)
-* committed: number of committed
-* compensated: number of compensated
-* suspended: number of suspended
-* repositoryReceived: number of events received by the repository component
-* repositoryAccepted: number of events accepted by the repository component
-* repositoryRejected: number of events rejected by the repository component
-* repositoryAvgTime: average elapsed time on save data (milliseconds)
-
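As a rough illustration of how these counters relate, the sketch below derives a rejection ratio and the number of in-flight sagas from the sample metrics response shown above (plain Python, not part of Alpha):

```python
# Sample values taken from the metrics response above.
metrics = {
    "eventReceived": 8000, "eventRejected": 0,
    "sagaBeginCounter": 1000, "sagaEndCounter": 1000,
}

def rejection_ratio(received, rejected):
    # Share of rejected traffic; 0.0 when nothing was received yet.
    return rejected / received if received else 0.0

event_loss = rejection_ratio(metrics["eventReceived"], metrics["eventRejected"])
# Sagas that have started but not yet ended.
in_flight = metrics["sagaBeginCounter"] - metrics["sagaEndCounter"]
print(event_loss, in_flight)  # 0.0 0
```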
-#### Query Transaction
-
-* Query all transactions
-
-  ```bash
-  curl -X GET http://localhost:8090/alpha/api/v1/transaction?page=0&size=50
-  
-  {
-    "total": 2002,
-    "page": 0,
-    "size": 50,
-    "elapsed": 581,
-    "globalTransactions": [...]
-  }
-  ```
-
-  Request
-
-  * page: page number
-
-  * size: number of rows to return
-
-  Response
-
-  * total: total number of rows
-  * page: page number of this result
-  * size: number of rows returned
-  * elapsed: query elapsed time (milliseconds)
-  * globalTransactions: list of transaction data
-
-* Query transaction by globalTxId
-
-  ```bash
-  curl -X GET http://localhost:8090/alpha/api/v1/transaction/{globalTxId}
-  
-  {
-    "globalTxId": "e00a3bac-de6b-498f-99a4-c11d3087fd14",
-    "type": "SAGA",
-    "serviceName": "alpha-benchmark",
-    "instanceId": "alpha-benchmark-127.0.0.1",
-    "beginTime": 1564762932963,
-    "endTime": 1564762933197,
-    "state": "COMMITTED",
-    "subTxSize": 3,
-    "durationTime": 408,
-    "subTransactions": [...],
-    "events": [...]
-  }
-  ```
-
-  Request
-
-  * globalTxId: global transaction id
-
-  Response
-
-  * globalTxId: global transaction id
-  * type: SAGA or TCC
-  * serviceName: global transaction initiator service name
-  * instanceId: global transaction initiator instance id
-  * beginTime: global transaction start time
-  * endTime: global transaction end time
-  * state: global transaction final state (COMMITTED or COMPENSATED or SUSPENDED)
-  * subTxSize: number of sub-transaction
-  * durationTime: global transaction processing time
-  * subTransactions: sub-transaction list
-  * events: event list
-
-## Transactional Data Persistence
-
-Only ended transactions are persisted to Elasticsearch; the data of in-flight transactions is persisted by Akka.
-
-A transaction ends in one of the following states:
-
-* Ended successfully: final state is COMMITTED
-
-* Ended after compensation: final state is COMPENSATED
-
-* Ended abnormally: final state is SUSPENDED
-
-  The following situations lead to an abnormal end:
-
-  1. Timeout
-  2. Alpha received an unexpected event, for example a TxEndedEvent before the TxStartedEvent, or a SagaEndedEvent without any preceding sub-transaction events
-
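The unexpected-event rules above can be illustrated with a small sketch (plain Python, a deliberate simplification, not Alpha's actual implementation; the real finite state machine also handles compensation, retries, and timer-driven timeouts):

```python
# Illustrative sketch: classify a global transaction's final state
# from its ordered event list. Compensation is ignored for brevity.
def final_state(events):
    started, ended = set(), set()
    for e in events:
        kind, tx = e["type"], e.get("localTxId")
        if kind == "TxStartedEvent":
            started.add(tx)
        elif kind == "TxEndedEvent":
            if tx not in started:       # TxEndedEvent before TxStartedEvent
                return "SUSPENDED"
            ended.add(tx)
        elif kind == "SagaEndedEvent":
            if not started:             # no sub-transaction events at all
                return "SUSPENDED"
            return "COMMITTED"
    return "SUSPENDED"                  # saga never ended, e.g. timeout
```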
-### Persistent Configuration
-
-| name                                                         | default | description                                                  |
-| ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
-| alpha.feature.akka.transaction.repository.type               |         | Not persisted by default; currently only the elasticsearch option is supported |
-| alpha.feature.akka.transaction.repository.elasticsearch.memory.size | -1      | Persistence wait queue length, default is Integer.MAX        |
-| alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100     | Batch size                                                   |
-| alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000    | Refresh time                                                 |
-| spring.data.elasticsearch.cluster-name                       |         | Elasticsearch cluster name                                   |
-| spring.data.elasticsearch.cluster-nodes                      |         | Elasticsearch address, for example ip:9300                   |
-
-### Elasticsearch Index Name
-Alpha will automatically create an index  `alpha_global_transaction` 
-
-### Query By Elasticsearch APIs  
-
-* Query all transactions
-
-  ```bash
-  curl http://localhost:9200/alpha_global_transaction/_search
-  ```
-
-* Query transaction by globalTxId
-
-  ```bash
-  curl -X POST http://localhost:9200/alpha_global_transaction/_search -H 'Content-Type: application/json' -d '
-  {
-    "query": {
-      "bool": {
-        "must": [{
-          "term": {
-            "globalTxId.keyword": "974d089a-5476-48ed-847a-1e338456809b"
-          }
-        }],
-        "must_not": [],
-        "should": []
-      }
-    },
-    "from": 0,
-    "size": 10,
-    "sort": [],
-    "aggs": {}
-  }'
-  ```
-
-* Result json data
-
-  ```json
-  {
-    "took": 17,
-    "timed_out": false,
-    "_shards": {
-      "total": 5,
-      "successful": 5,
-      "skipped": 0,
-      "failed": 0
-    },
-    "hits": {
-      "total": 4874,
-      "max_score": 1.0,
-      "hits": [{
-        "_index": "alpha_global_transaction",
-        "_type": "alpha_global_transaction_type",
-        "_id": "209791a0-34f4-40da-807e-9c5b8786dd61",
-        "_score": 1.0,
-        "_source": {
-          "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-          "type": "SAGA",
-          "serviceName": "alpha-benchmark",
-          "instanceId": "alpha-benchmark-127.0.0.1",
-          "beginTime": 1563982631298,
-          "endTime": 1563982631320,
-          "state": "COMMITTED",
-          "subTxSize": 3,
-          "durationTime": 22,
-          "subTransactions": [...],
-          "events": [...]
-        }
-      },{...}]
-    }
-  }
-  ```
-
-* Result data sample
-
-  ```json
-  {
-    "took": 17,
-    "timed_out": false,
-    "_shards": {
-      "total": 5,
-      "successful": 5,
-      "skipped": 0,
-      "failed": 0
-    },
-    "hits": {
-      "total": 4874,
-      "max_score": 1.0,
-      "hits": [{
-        "_index": "alpha_global_transaction",
-        "_type": "alpha_global_transaction_type",
-        "_id": "209791a0-34f4-40da-807e-9c5b8786dd61",
-        "_score": 1.0,
-        "_source": {
-          "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-          "type": "SAGA",
-          "serviceName": "alpha-benchmark",
-          "instanceId": "alpha-benchmark-127.0.0.1",
-          "beginTime": 1563982631298,
-          "endTime": 1563982631320,
-          "state": "COMMITTED",
-          "subTxSize": 3,
-          "durationTime": 22,
-          "subTransactions": [{
-            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "beginTime": 1563982631308,
-            "endTime": 1563982631309,
-            "state": "COMMITTED",
-            "durationTime": 1
-          }, {
-            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "beginTime": 1563982631320,
-            "endTime": 1563982631320,
-            "state": "COMMITTED",
-            "durationTime": 0
-          }, {
-            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "beginTime": 1563982631309,
-            "endTime": 1563982631309,
-            "state": "COMMITTED",
-            "durationTime": 0
-          }],
-          "events": [{
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "createTime": 1563982631298,
-            "timeout": 0,
-            "type": "SagaStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
-            "createTime": 1563982631299,
-            "compensationMethod": "service a",
-            "payloads": "AQE=",
-            "retryMethod": "",
-            "retries": 0,
-            "type": "TxStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
-            "createTime": 1563982631301,
-            "type": "TxEndedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
-            "createTime": 1563982631302,
-            "compensationMethod": "service b",
-            "payloads": "AQE=",
-            "retryMethod": "",
-            "retries": 0,
-            "type": "TxStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
-            "createTime": 1563982631304,
-            "type": "TxEndedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
-            "createTime": 1563982631309,
-            "compensationMethod": "service c",
-            "payloads": "AQE=",
-            "retryMethod": "",
-            "retries": 0,
-            "type": "TxStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
-            "createTime": 1563982631311,
-            "type": "TxEndedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "createTime": 1563982631312,
-            "type": "SagaEndedEvent"
-          }]
-        }
-      }]
-    }
-  }
-  ```
-
-  More references: [Elasticsearch APIs](https://www.elastic.co/guide/en/elasticsearch/reference/6.6/docs.html)
-
-## High Availability
-
-You can achieve high availability of the service by deploying an Alpha cluster, and choose the event channel type via a parameter.
-
-### Event Channel Type
-
-Alpha receives the events sent by Omega and puts them into the event channel to await Akka processing.
-
-| Type                 | Mode         | Description                                                  |
-| -------------------- | ------------ | ------------------------------------------------------------ |
-| memory (default)     | single       | Uses memory as the data channel. **Not recommended for production environments** |
-| redis (coming soon)  | master-slave | Uses Redis PUB/SUB as the data channel. Only the master node processes data; after the master node goes down, a slave node takes over as master. |
-| kafka (coming soon)  | cluster      | Uses Kafka as the data channel with the global transaction ID as the partitioning key; supports horizontal scaling. |
-
-* Memory channel
-
-| name                                   | default | description                        |
-| -------------------------------------- | ------- | ---------------------------------- |
-| alpha.feature.akka.channel.type        | memory  |                                    |
-| alpha.feature.akka.channel.memory.size | -1      | queue size, default is Integer.MAX |
-
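A minimal sketch of an explicit memory-channel configuration follows; the size 10000 is an arbitrary example, not a recommended value:

```properties
alpha.feature.akka.channel.type=memory
alpha.feature.akka.channel.memory.size=10000
```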
-* Redis channel
-
-  coming soon
-
-* Kafka channel
-
-  coming soon
-
-### Akka Configuration
-
-Prefix every Akka parameter name with `akkaConfig`
-
-### Akka Persistence
-
-* Default
-
-```properties
-akkaConfig.akka.persistence.journal.plugin=akka.persistence.journal.inmem
-akkaConfig.akka.persistence.journal.leveldb.dir=target/example/journal
-akkaConfig.akka.persistence.snapshot-store.plugin=akka.persistence.snapshot-store.local
-akkaConfig.akka.persistence.snapshot-store.local.dir=target/example/snapshots
-```
-
-* Redis
-
-```properties
-akkaConfig.akka.persistence.journal.plugin=akka-persistence-redis.journal
-akkaConfig.akka.persistence.snapshot-store.plugin=akka-persistence-redis.snapshot
-akkaConfig.akka-persistence-redis.redis.mode=simple
-akkaConfig.akka-persistence-redis.redis.host=localhost
-akkaConfig.akka-persistence-redis.redis.port=6379
-akkaConfig.akka-persistence-redis.redis.database=0
-```
-
-More references: [akka-persistence-redis](https://index.scala-lang.org/safety-data/akka-persistence-redis/akka-persistence-redis/0.4.0?target=_2.11)
-
-Usage example
-
-```bash
-java -jar alpha-server-${version}-exec.jar \
-  --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
-  --spring.datasource.username=saga-user \
-  --spring.datasource.password=saga-password \
-  --spring.profiles.active=prd \
-  --alpha.feature.akka.enabled=true \
-  --alpha.feature.akka.transaction.repository.type=elasticsearch \
-  --spring.data.elasticsearch.cluster-name=docker-cluster \
-  --spring.data.elasticsearch.cluster-nodes=localhost:9300 \
-  --akkaConfig.akka.persistence.journal.plugin=akka-persistence-redis.journal \
-  --akkaConfig.akka.persistence.snapshot-store.plugin=akka-persistence-redis.snapshot \
-  --akkaConfig.akka-persistence-redis.redis.mode=simple \
-  --akkaConfig.akka-persistence-redis.redis.host=localhost \
-  --akkaConfig.akka-persistence-redis.redis.port=6379 \
-  --akkaConfig.akka-persistence-redis.redis.database=0  
-```
-
-### Akka Cluster
-
-coming soon
-
-## Appendix
-
-[design document](design_fsm_zh.md)
-
-[benchmark](benchmark_zh.md)
\ No newline at end of file
diff --git a/docs/fsm/how_to_use_fsm_zh.md b/docs/fsm/how_to_use_fsm_zh.md
deleted file mode 100644
index cc0111e..0000000
--- a/docs/fsm/how_to_use_fsm_zh.md
+++ /dev/null
@@ -1,527 +0,0 @@
-# Alpha With State Machine
-
-## Quick Start
-
-The state machine mode uses Elasticsearch to store ended transaction data
-
-* Start PostgreSQL
-
-  ```bash
-  docker run -d -e "POSTGRES_DB=saga" -e "POSTGRES_USER=saga" -e "POSTGRES_PASSWORD=password" -p 5432:5432 postgres
-  ```
-
-* Start Elasticsearch
-
-  ```bash
-  docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2
-  ```
-
-* Start Alpha
-  Use `alpha.feature.akka.enabled=true` to enable state machine mode
-
-  ```bash
-  java -jar alpha-server-${version}-exec.jar \
-    --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
-    --spring.datasource.username=saga \
-    --spring.datasource.password=password \
-    --spring.profiles.active=prd \
-    --alpha.feature.akka.enabled=true \
-    --alpha.feature.akka.transaction.repository.type=elasticsearch \
-    --spring.data.elasticsearch.cluster-name=docker-cluster \
-    --spring.data.elasticsearch.cluster-nodes=localhost:9300  
-  ```
-
-  See the "Transaction Data Persistence" section for more persistence parameters
-
-  **Note:** `spring.data.elasticsearch.cluster-name` sets the Elasticsearch cluster name; when Elasticsearch is started with Docker the default cluster name is `docker-cluster`. You can query it with `curl http://localhost:9200/`
-
-* Omega-side configuration
-
-  Use `alpha.feature.akka.enabled=true` to enable state machine mode
-
-  ```properties
-  alpha.feature.akka.enabled=true
-  ```
-
-* Web admin UI
-
-  Open http://localhost:8090/admin in a browser, [screencast](https://youtu.be/ORoRkZeg8gA)
-
-  Dashboard
-
-  ![image-20190809122237766](assets/ui-dashboard.png)
-
-  Transaction list
-
-  ![image-20190809122324563](assets/ui-transactions-list.png)
-
-  Transaction details - successful
-
-  ![image-20190809122352852](assets/ui-transaction-details-successful.png)
-
-  Transaction details - compensated
-
-  ![image-20190809122516345](assets/ui-transaction-details-compensated.png)
-
-  Transaction details - failed
-
-  ![image-20190809122442186](assets/ui-transaction-details-failed.png)
-
-## APIs
-
-#### Metrics
-
-You can query Alpha's performance metrics through an API; use the benchmark tool `AlphaBenchmark` to simulate sending data and quickly try this feature
-
-For example, simulate 10 concurrent connections sending 1000 global transactions:
-
-```bash
-java -jar alpha-benchmark-0.5.0-SNAPSHOT-exec.jar --alpha.cluster.address=0.0.0.0:8080 --w=0 --n=1000 --c=10
-```
-
-Query the metrics
-
-```bash
-curl http://localhost:8090/alpha/api/v1/metrics
-
-{
-  nodeType: "MASTER",
-  metrics: {
-    eventReceived: 8000,
-    eventAccepted: 8000,
-    eventRejected: 0,
-    eventAvgTime: 0,
-    actorReceived: 8000,
-    actorAccepted: 8000,
-    actorRejected: 0,
-    actorAvgTime: 0,
-    sagaBeginCounter: 1000,
-    sagaEndCounter: 1000,
-    sagaAvgTime: 9,
-    committed: 1000,
-    compensated: 0,
-    suspended: 0,
-    repositoryReceived: 1000,
-    repositoryAccepted: 1000,
-    repositoryRejected: 0,
-    repositoryAvgTime: 0.88
-  }
-}
-```
-
-For example, `sagaAvgTime: 9` above means each global transaction took 9 milliseconds in Akka, and `repositoryAvgTime: 0.88` means persisting each global transaction took 0.88 milliseconds
-
-Metric descriptions
-
-* eventReceived: number of gRPC events received by Alpha
-* eventAccepted: number of gRPC events processed by Alpha (events put into the event channel)
-* eventRejected: number of gRPC events rejected by Alpha
-* eventAvgTime: average elapsed time in Alpha (milliseconds)
-* actorReceived: number of events received by Akka
-* actorAccepted: number of events processed by Akka
-* actorRejected: number of events rejected by Akka
-* actorAvgTime: average elapsed time in Akka (milliseconds)
-* sagaBeginCounter: number of started Saga global transactions
-* sagaEndCounter: number of ended Saga global transactions
-* sagaAvgTime: average elapsed time (milliseconds)
-* committed: number of Saga global transactions in COMMITTED state
-* compensated: number of Saga global transactions in COMPENSATED state
-* suspended: number of Saga global transactions in SUSPENDED state
-* repositoryReceived: number of global transactions received by the storage module
-* repositoryAccepted: number of global transactions processed by the storage module
-* repositoryRejected: number of global transactions rejected by the storage module
-* repositoryAvgTime: average elapsed time (milliseconds)
-
-#### Transaction Data Query
-
-> Requires Elasticsearch to store transactions
-
-* Query the transaction list
-
-  ```bash
-  curl -X GET http://localhost:8090/alpha/api/v1/transaction?page=0&size=50
-  
-  {
-    "total": 2002,
-    "page": 0,
-    "size": 50,
-    "elapsed": 581,
-    "globalTransactions": [...]
-  }
-  ```
-
-  Request parameters
-
-  * page: page number
-
-  * size: number of rows to return
-
-  Response parameters
-
-  * total: total number of rows
-  * page: page number of this result
-  * size: number of rows returned
-  * elapsed: query elapsed time (milliseconds)
-  * globalTransactions: list of transaction data
-
-* Query a single transaction
-
-  ```bash
-  curl -X GET http://localhost:8090/alpha/api/v1/transaction/{globalTxId}
-  
-  {
-    "globalTxId": "e00a3bac-de6b-498f-99a4-c11d3087fd14",
-    "type": "SAGA",
-    "serviceName": "alpha-benchmark",
-    "instanceId": "alpha-benchmark-127.0.0.1",
-    "beginTime": 1564762932963,
-    "endTime": 1564762933197,
-    "state": "COMMITTED",
-    "subTxSize": 3,
-    "durationTime": 408,
-    "subTransactions": [...],
-    "events": [...]
-  }
-  ```
-
-  Request parameters
-
-  * globalTxId: global transaction ID
-
-  Response parameters
-
-  * globalTxId: global transaction ID
-  * type: transaction type, currently only SAGA; TCC will be added later
-  * serviceName: service name of the global transaction initiator
-  * instanceId: instance ID of the global transaction initiator
-  * beginTime: transaction start time
-  * endTime: transaction end time
-  * state: final transaction state
-  * subTxSize: number of sub-transactions
-  * durationTime: global transaction processing time
-  * subTransactions: list of sub-transaction data
-  * events: list of events
-
-## Transaction Data Persistence
-
-Only ended transactions are persisted to Elasticsearch; the data of in-flight transactions is persisted by Akka. A transaction ends in one of the following states
-
-* Ended successfully: final state is COMMITTED
-
-* Ended after compensation: final state is COMPENSATED
-
-* Ended abnormally: final state is SUSPENDED
-
-  The following situations lead to an abnormal end
-
-  1. Transaction timeout
-  2. Alpha received an unexpected event, for example a TxEndedEvent before the TxStartedEvent, or a SagaEndedEvent without any preceding sub-transaction events; these rules are defined in the finite state machine.
-
-### Persistence Parameters
-
-| Parameter                                                    | Default | Description                                                  |
-| ------------------------------------------------------------ | ------ | ------------------------------------------------------------ |
-| alpha.feature.akka.transaction.repository.type               |        | Persistence type; currently only elasticsearch is supported, no persistence if unset |
-| alpha.feature.akka.transaction.repository.elasticsearch.memory.size | -1     | Persistence queue size, default Integer.MAX. The Actor puts ended transaction data into this queue to await storage in Elasticsearch |
-| alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100    | Elasticsearch batch insert size                              |
-| alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000   | Interval for periodically flushing to Elasticsearch          |
-| spring.data.elasticsearch.cluster-name                       |        | Elasticsearch cluster name                                   |
-| spring.data.elasticsearch.cluster-nodes                      |        | Elasticsearch node address, format: localhost:9300, multiple addresses separated by commas |
-
-### Elasticsearch Index
-Alpha creates an index named `alpha_global_transaction` in Elasticsearch
-
-### Querying Transaction Data with the Elasticsearch APIs
-
-* Query all transactions
-
-  ```bash
-  curl http://localhost:9200/alpha_global_transaction/_search
-  ```
-
-* Query transactions matching a globalTxId
-
-  ```bash
-  curl -X POST http://localhost:9200/alpha_global_transaction/_search -H 'Content-Type: application/json' -d '
-  {
-    "query": {
-      "bool": {
-        "must": [{
-          "term": {
-            "globalTxId.keyword": "974d089a-5476-48ed-847a-1e338456809b"
-          }
-        }],
-        "must_not": [],
-        "should": []
-      }
-    },
-    "from": 0,
-    "size": 10,
-    "sort": [],
-    "aggs": {}
-  }'
-  ```
-
-* Result in JSON format
-
-  ```json
-  {
-    "took": 17,
-    "timed_out": false,
-    "_shards": {
-      "total": 5,
-      "successful": 5,
-      "skipped": 0,
-      "failed": 0
-    },
-    "hits": {
-      "total": 4874,
-      "max_score": 1.0,
-      "hits": [{
-        "_index": "alpha_global_transaction",
-        "_type": "alpha_global_transaction_type",
-        "_id": "209791a0-34f4-40da-807e-9c5b8786dd61",
-        "_score": 1.0,
-        "_source": {
-          "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-          "type": "SAGA",
-          "serviceName": "alpha-benchmark",
-          "instanceId": "alpha-benchmark-127.0.0.1",
-          "beginTime": 1563982631298,
-          "endTime": 1563982631320,
-          "state": "COMMITTED",
-          "subTxSize": 3,
-          "durationTime": 22,
-          "subTransactions": [...],
-          "events": [...]
-        }
-      },{...}]
-    }
-  }
-  ```
-
-* Sample JSON result
-
-  ```json
-  {
-    "took": 17,
-    "timed_out": false,
-    "_shards": {
-      "total": 5,
-      "successful": 5,
-      "skipped": 0,
-      "failed": 0
-    },
-    "hits": {
-      "total": 4874,
-      "max_score": 1.0,
-      "hits": [{
-        "_index": "alpha_global_transaction",
-        "_type": "alpha_global_transaction_type",
-        "_id": "209791a0-34f4-40da-807e-9c5b8786dd61",
-        "_score": 1.0,
-        "_source": {
-          "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-          "type": "SAGA",
-          "serviceName": "alpha-benchmark",
-          "instanceId": "alpha-benchmark-127.0.0.1",
-          "beginTime": 1563982631298,
-          "endTime": 1563982631320,
-          "state": "COMMITTED",
-          "subTxSize": 3,
-          "durationTime": 22,
-          "subTransactions": [{
-            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "beginTime": 1563982631308,
-            "endTime": 1563982631309,
-            "state": "COMMITTED",
-            "durationTime": 1
-          }, {
-            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "beginTime": 1563982631320,
-            "endTime": 1563982631320,
-            "state": "COMMITTED",
-            "durationTime": 0
-          }, {
-            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "beginTime": 1563982631309,
-            "endTime": 1563982631309,
-            "state": "COMMITTED",
-            "durationTime": 0
-          }],
-          "events": [{
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "createTime": 1563982631298,
-            "timeout": 0,
-            "type": "SagaStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
-            "createTime": 1563982631299,
-            "compensationMethod": "service a",
-            "payloads": "AQE=",
-            "retryMethod": "",
-            "retries": 0,
-            "type": "TxStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
-            "createTime": 1563982631301,
-            "type": "TxEndedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
-            "createTime": 1563982631302,
-            "compensationMethod": "service b",
-            "payloads": "AQE=",
-            "retryMethod": "",
-            "retries": 0,
-            "type": "TxStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
-            "createTime": 1563982631304,
-            "type": "TxEndedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
-            "createTime": 1563982631309,
-            "compensationMethod": "service c",
-            "payloads": "AQE=",
-            "retryMethod": "",
-            "retries": 0,
-            "type": "TxStartedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
-            "createTime": 1563982631311,
-            "type": "TxEndedEvent"
-          }, {
-            "serviceName": "alpha-benchmark",
-            "instanceId": "alpha-benchmark-127.0.0.1",
-            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "localTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
-            "createTime": 1563982631312,
-            "type": "SagaEndedEvent"
-          }]
-        }
-      }]
-    }
-  }
-  ```
-
-  For more usage, see the [Elasticsearch APIs](https://www.elastic.co/guide/en/elasticsearch/reference/6.6/docs.html)
-
-## High Availability
-
-The service can be made highly available by deploying an Alpha cluster, and the event channel type can be chosen with a parameter.
-
-### Event Channel Types
-
-After Alpha receives an event from Omega, it puts the event into an event channel where it waits to be processed by Akka.
-
-| Channel type        | Mode         | Description                                                  |
-| ------------------- | ------------ | ------------------------------------------------------------ |
-| memory (default)    | Singleton    | Uses memory as the data channel; not recommended for production |
-| redis (coming soon) | Master/slave | Uses Redis PUB/SUB as the data channel; the master node processes the data while the slave nodes stand by and take over when the master goes down |
-| kafka (coming soon) | Cluster      | Uses Kafka as the data channel with the global transaction ID as the partition key; all nodes in the cluster work simultaneously and can be scaled horizontally |
-
-
- The channel type can be configured with the `alpha.feature.akka.channel.type` parameter.
-
-* Memory channel
-
-| Parameter                              | Default | Description                                                  |
-| -------------------------------------- | ------- | ------------------------------------------------------------ |
-| alpha.feature.akka.channel.type        | memory  | Available types are memory, redis, kafka                     |
-| alpha.feature.akka.channel.memory.size | -1      | Size of the in-memory queue for the memory type; -1 means Integer.MAX_VALUE |
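-
-As a sketch, the memory channel defaults above map to the following `application.properties` entries:
-
-```properties
-alpha.feature.akka.channel.type=memory
-# -1 means an unbounded in-memory queue (Integer.MAX_VALUE)
-alpha.feature.akka.channel.memory.size=-1
-```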
-
-* Redis channel
-
-  coming soon
-
-* Kafka channel
-
-  coming soon
-
-### Akka Parameter Configuration
-
-Akka parameters can be configured as `akkaConfig.{akka_key} = value`; for example, the default in-memory configuration is shown below.
-
-### Akka Persistence
-
-```properties
-akkaConfig.akka.persistence.journal.plugin=akka.persistence.journal.inmem
-akkaConfig.akka.persistence.journal.leveldb.dir=target/example/journal
-akkaConfig.akka.persistence.snapshot-store.plugin=akka.persistence.snapshot-store.local
-akkaConfig.akka.persistence.snapshot-store.local.dir=target/example/snapshots
-```
-
-You can switch to Redis-based persistence with the following parameters
-
-```properties
-akkaConfig.akka.persistence.journal.plugin=akka-persistence-redis.journal
-akkaConfig.akka.persistence.snapshot-store.plugin=akka-persistence-redis.snapshot
-akkaConfig.akka-persistence-redis.redis.mode=simple
-akkaConfig.akka-persistence-redis.redis.host=localhost
-akkaConfig.akka-persistence-redis.redis.port=6379
-akkaConfig.akka-persistence-redis.redis.database=0
-```
-
-For more parameters, see [akka-persistence-redis](https://index.scala-lang.org/safety-data/akka-persistence-redis/akka-persistence-redis/0.4.0?target=_2.11)
-
-You can set these parameters directly on the Alpha startup command line, for example
-
-```bash
-java -jar alpha-server-${version}-exec.jar \
-  --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
-  --spring.datasource.username=saga-user \
-  --spring.datasource.password=saga-password \
-  --spring.profiles.active=prd \
-  --alpha.feature.akka.enabled=true \
-  --alpha.feature.akka.transaction.repository.type=elasticsearch \
-  --spring.data.elasticsearch.cluster-name=docker-cluster \
-  --spring.data.elasticsearch.cluster-nodes=localhost:9300 \
-  --akkaConfig.akka.persistence.journal.plugin=akka-persistence-redis.journal \
-  --akkaConfig.akka.persistence.snapshot-store.plugin=akka-persistence-redis.snapshot \
-  --akkaConfig.akka-persistence-redis.redis.mode=simple \
-  --akkaConfig.akka-persistence-redis.redis.host=localhost \
-  --akkaConfig.akka-persistence-redis.redis.port=6379 \
-  --akkaConfig.akka-persistence-redis.redis.database=0  
-```
-
-### Akka Cluster
-
-coming soon
-
-## Appendix
-
-[Design document](design_fsm_zh.md)
-
-[Benchmark report](benchmark_zh.md)
\ No newline at end of file
diff --git a/docs/fsm/persistence_zh.md b/docs/fsm/persistence_zh.md
new file mode 100644
index 0000000..9a0065a
--- /dev/null
+++ b/docs/fsm/persistence_zh.md
@@ -0,0 +1,235 @@
+# Transaction Data Persistence
+
+Only finished transactions are persisted to Elasticsearch; the data of in-flight transactions is persisted through Akka. A transaction can finish in the following states:
+
+- The transaction completes successfully; the final state is COMMITTED
+
+- The transaction completes after compensation; the final state is COMPENSATED
+
+- The transaction terminates abnormally; the final state is SUSPENDED
+
+  A transaction terminates abnormally in the following cases:
+
+  1. The transaction times out
+  2. Alpha receives an unexpected event, for example a TxEndedEvent arriving before any TxStartedEvent, or a SagaEndedEvent arriving without any sub-transaction events; these rules are defined in the finite state machine.
+
+### Persistence Parameters
+
+| Parameter                                                    | Default | Description                                                  |
+| ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
+| alpha.feature.akka.transaction.repository.type               |         | Persistence type; currently the only available value is elasticsearch, and nothing is stored if unset |
+| alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100     | Elasticsearch bulk insert size                               |
+| alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000    | Interval (ms) of the scheduled flush to Elasticsearch        |
+| spring.data.elasticsearch.cluster-name                       |         | Elasticsearch cluster name                                   |
+| spring.data.elasticsearch.cluster-nodes                      |         | Elasticsearch node addresses in the format localhost:9300; separate multiple addresses with commas |
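+
+As a sketch, the persistence parameters above can be set in `application.properties`; the cluster name and node address below are placeholders for your environment:
+
+```properties
+alpha.feature.akka.transaction.repository.type=elasticsearch
+alpha.feature.akka.transaction.repository.elasticsearch.batchSize=100
+alpha.feature.akka.transaction.repository.elasticsearch.refreshTime=5000
+spring.data.elasticsearch.cluster-name=docker-cluster
+spring.data.elasticsearch.cluster-nodes=localhost:9300
+```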
+
+### Elasticsearch Index
+
+Alpha creates an index named `alpha_global_transaction` in Elasticsearch
+
+### Querying Transaction Data with the Elasticsearch APIs
+
+- Query all transactions
+
+  ```bash
+  curl http://localhost:9200/alpha_global_transaction/_search
+  ```
+
+- Query transactions matching a globalTxId
+
+  ```bash
+  curl -X POST http://localhost:9200/alpha_global_transaction/_search -H 'Content-Type: application/json' -d '
+  {
+    "query": {
+      "bool": {
+        "must": [{
+          "term": {
+            "globalTxId.keyword": "974d089a-5476-48ed-847a-1e338456809b"
+          }
+        }],
+        "must_not": [],
+        "should": []
+      }
+    },
+    "from": 0,
+    "size": 10,
+    "sort": [],
+    "aggs": {}
+  }'
+  ```
+
+- Format of the returned JSON
+
+  ```json
+  {
+    "took": 17,
+    "timed_out": false,
+    "_shards": {
+      "total": 5,
+      "successful": 5,
+      "skipped": 0,
+      "failed": 0
+    },
+    "hits": {
+      "total": 4874,
+      "max_score": 1.0,
+      "hits": [{
+        "_index": "alpha_global_transaction",
+        "_type": "alpha_global_transaction_type",
+        "_id": "209791a0-34f4-40da-807e-9c5b8786dd61",
+        "_score": 1.0,
+        "_source": {
+          "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+          "type": "SAGA",
+          "serviceName": "alpha-benchmark",
+          "instanceId": "alpha-benchmark-127.0.0.1",
+          "beginTime": 1563982631298,
+          "endTime": 1563982631320,
+          "state": "COMMITTED",
+          "subTxSize": 3,
+          "durationTime": 22,
+          "subTransactions": [...],
+          "events": [...]
+        }
+      },{...}]
+    }
+  }
+  ```
+
+- Sample of the returned JSON
+
+  ```json
+  {
+    "took": 17,
+    "timed_out": false,
+    "_shards": {
+      "total": 5,
+      "successful": 5,
+      "skipped": 0,
+      "failed": 0
+    },
+    "hits": {
+      "total": 4874,
+      "max_score": 1.0,
+      "hits": [{
+        "_index": "alpha_global_transaction",
+        "_type": "alpha_global_transaction_type",
+        "_id": "209791a0-34f4-40da-807e-9c5b8786dd61",
+        "_score": 1.0,
+        "_source": {
+          "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+          "type": "SAGA",
+          "serviceName": "alpha-benchmark",
+          "instanceId": "alpha-benchmark-127.0.0.1",
+          "beginTime": 1563982631298,
+          "endTime": 1563982631320,
+          "state": "COMMITTED",
+          "subTxSize": 3,
+          "durationTime": 22,
+          "subTransactions": [{
+            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "beginTime": 1563982631308,
+            "endTime": 1563982631309,
+            "state": "COMMITTED",
+            "durationTime": 1
+          }, {
+            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "beginTime": 1563982631320,
+            "endTime": 1563982631320,
+            "state": "COMMITTED",
+            "durationTime": 0
+          }, {
+            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "beginTime": 1563982631309,
+            "endTime": 1563982631309,
+            "state": "COMMITTED",
+            "durationTime": 0
+          }],
+          "events": [{
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "createTime": 1563982631298,
+            "timeout": 0,
+            "type": "SagaStartedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
+            "createTime": 1563982631299,
+            "compensationMethod": "service a",
+            "payloads": "AQE=",
+            "retryMethod": "",
+            "retries": 0,
+            "type": "TxStartedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "03fe15b2-a070-4e55-9b5b-801c2181dd0a",
+            "createTime": 1563982631301,
+            "type": "TxEndedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
+            "createTime": 1563982631302,
+            "compensationMethod": "service b",
+            "payloads": "AQE=",
+            "retryMethod": "",
+            "retries": 0,
+            "type": "TxStartedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "95821ce3-2202-4e55-9343-4e6a6519821f",
+            "createTime": 1563982631304,
+            "type": "TxEndedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
+            "createTime": 1563982631309,
+            "compensationMethod": "service c",
+            "payloads": "AQE=",
+            "retryMethod": "",
+            "retries": 0,
+            "type": "TxStartedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "923f83fd-0bce-4fac-8c89-ecbe7c5e9106",
+            "createTime": 1563982631311,
+            "type": "TxEndedEvent"
+          }, {
+            "serviceName": "alpha-benchmark",
+            "instanceId": "alpha-benchmark-127.0.0.1",
+            "globalTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "parentTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "localTxId": "209791a0-34f4-40da-807e-9c5b8786dd61",
+            "createTime": 1563982631312,
+            "type": "SagaEndedEvent"
+          }]
+        }
+      }]
+    }
+  }
+  ```
+
+  For more usage, see the [Elasticsearch APIs](https://www.elastic.co/guide/en/elasticsearch/reference/6.6/docs.html)
\ No newline at end of file
diff --git a/docs/user_guide.md b/docs/user_guide.md
index 34150f0..c927bb5 100644
--- a/docs/user_guide.md
+++ b/docs/user_guide.md
@@ -816,4 +816,4 @@ Alpha can be highly available by deploying multiple instances, enable cluster su
 
 ## Experiment
 
-[Alpha State Machine Mode](fsm/how_to_use_fsm.md)
\ No newline at end of file
+[State Machine Mode](fsm/fsm_manual.md)
\ No newline at end of file
diff --git a/docs/user_guide_zh.md b/docs/user_guide_zh.md
index 91ccf52..78f468b 100644
--- a/docs/user_guide_zh.md
+++ b/docs/user_guide_zh.md
@@ -802,4 +802,4 @@ Alpha can be made highly available by deploying multiple instances; use `alpha.cluste
 
 ## Experiments
 
-[Alpha State Machine Mode](fsm/fsm_manual_zh.md)
+[State Machine Mode](fsm/fsm_manual_zh.md)
+[状态机模式](fsm/fsm_manual_zh.md)
\ No newline at end of file


[servicecomb-pack] 24/42: SCB-1368 Modify ES default batchSize 100

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 235032b7b3c35c5d2d75d7388f11dea773901074
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 02:18:00 2019 +0800

    SCB-1368 Modify ES default batchSize 100
---
 .../pack/alpha/fsm/FsmAutoConfiguration.java       |  2 +-
 docs/fsm/fsm_manual_zh.md                          | 73 ++++++++--------------
 2 files changed, 26 insertions(+), 49 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
index 777ae3f..97430ba 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
@@ -55,7 +55,7 @@ import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
 @ConditionalOnProperty(value = {"alpha.feature.akka.enabled"})
 public class FsmAutoConfiguration {
 
-  @Value("${alpha.feature.akka.transaction.repository.elasticsearch.batchSize:1000}")
+  @Value("${alpha.feature.akka.transaction.repository.elasticsearch.batchSize:100}")
   int repositoryElasticsearchBatchSize;
 
   @Value("${alpha.feature.akka.transaction.repository.elasticsearch.refreshTime:5000}")
diff --git a/docs/fsm/fsm_manual_zh.md b/docs/fsm/fsm_manual_zh.md
index 6368e0e..7e81473 100755
--- a/docs/fsm/fsm_manual_zh.md
+++ b/docs/fsm/fsm_manual_zh.md
@@ -139,43 +139,7 @@ Sub Transactions panel: the sub-transaction IDs contained in this transaction, their states, and the sub-
 
 ## Cluster
 
-Parameters
-
-| Parameter                  | Value | Description |
-| -------------------------- | ----- | ----------- |
-| server.port                | 8090  |             |
-| alpha.server.port          | 8080  |             |
-| alpha.feature.akka.enabled | true  |             |
-
-Parameters
-
-| Parameter                       | Value             | Description |
-| ------------------------------- | ----------------- | ----------- |
-| alpha.feature.akka.channel.type | kafka             |             |
-| spring.kafka.bootstrap-servers  | 192.168.1.10:9092 |             |
-|                                 |                   |             |
-
-Persistence parameters
-
-| Parameter                                      | Value          | Description |
-| ---------------------------------------------- | -------------- | ----------- |
-| alpha.feature.akka.transaction.repository.type | elasticsearch  |             |
-| spring.data.elasticsearch.cluster-name         | docker-cluster |             |
-| spring.data.elasticsearch.cluster-nodes        | localhost:9300 |             |
-
-Akka
-
-| Parameter                                         | Value                           | Description |
-| ------------------------------------------------- | ------------------------------- | ----------- |
-| akkaConfig.akka.persistence.journal.plugin        | akka-persistence-redis.journal  |             |
-| akkaConfig.akka.persistence.snapshot-store.plugin | akka-persistence-redis.snapshot |             |
-| akkaConfig.akka-persistence-redis.redis.mode      | simple                          |             |
-| akkaConfig.akka-persistence-redis.redis.host      | localhost                       |             |
-| akkaConfig.akka-persistence-redis.redis.port      | 6379                            |             |
-| akkaConfig.akka-persistence-redis.redis.database  | 0                               |             |
-|                                                   |                                 |             |
-
-Multiple Alpha instances can be deployed to scale processing capacity horizontally; the cluster depends on a Kafka service.
+Multiple Alpha instances can be deployed to scale processing capacity horizontally and achieve high availability; the cluster depends on a Kafka service.
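+
+A minimal sketch of the Kafka channel settings this implies; the broker address below is a placeholder for your environment:
+
+```properties
+alpha.feature.akka.channel.type=kafka
+spring.kafka.bootstrap-servers=192.168.1.10:9092
+```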
 
 * Start Kafka. It can be started with docker compose, for example with the following compose file
 
@@ -208,6 +172,7 @@ Akka
     --server.port=8090 \
     --server.host=127.0.0.1 \
     --alpha.server.port=8080 \
+    --alpha.feature.akka.enabled=true \
     --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
@@ -216,10 +181,8 @@ Akka
     --spring.data.elasticsearch.cluster-nodes=127.0.0.1:9300 \
     --akkaConfig.akka.remote.artery.canonical.port=8070 \
     --akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
-    --akkaConfig.akka-persistence-redis.redis.mode=simple \
     --akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
     --akkaConfig.akka-persistence-redis.redis.port=6379 \
-    --akkaConfig.akka-persistence-redis.redis.database=0 \
     --spring.profiles.active=prd,cluster
   ```
 
@@ -230,6 +193,7 @@ Akka
     --server.port=8091 \
     --server.host=127.0.0.1 \
     --alpha.server.port=8081 \
+    --alpha.feature.akka.enabled=true \
     --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
     --spring.datasource.username=saga \
     --spring.datasource.password=password \
@@ -238,30 +202,43 @@ Akka
     --spring.data.elasticsearch.cluster-nodes=127.0.0.1:9300 \
     --akkaConfig.akka.remote.artery.canonical.port=8071 \
     --akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
-    --akkaConfig.akka-persistence-redis.redis.mode=simple \
     --akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
     --akkaConfig.akka-persistence-redis.redis.port=6379 \
-    --akkaConfig.akka-persistence-redis.redis.database=0 \
     --spring.profiles.active=prd,cluster
   ```
 
   Cluster parameter description
 
-  server.port: REST port
+  | Parameter                                                    | Default | Description                                        |
+  | ------------------------------------------------------------ | ------- | -------------------------------------------------- |
+  | server.port                                                  | 8090    | REST port; unique per node                         |
+  | alpha.server.port                                            | 8080    | gRPC port; unique per node                         |
+  | spring.kafka.bootstrap-servers                               |         | Kafka bootstrap address                            |
+  | kafka.numPartitions                                          | 6       | Number of partitions of the Kafka topic            |
+  | akkaConfig.akka.remote.artery.canonical.port                 |         | Akka cluster port; unique per node                 |
+  | akkaConfig.akka.cluster.seed-nodes[x]                        |         | Akka cluster seed node addresses; one line per seed node |
+  | akkaConfig.akka-persistence-redis.redis.host                 |         | Redis host                                         |
+  | akkaConfig.akka-persistence-redis.redis.port                 |         | Redis port                                         |
+  | akkaConfig.akka-persistence-redis.redis.database             | 0       | Redis database index                               |
+  | alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100     | Elasticsearch bulk commit size                     |
+  | alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000    | Elasticsearch scheduled commit interval            |
+  | spring.profiles.active                                       |         | Active profiles; must be set to prd,cluster        |
 
-  alpha.server.port: gRPC port
+## High Availability
 
-  alpha.feature.akka.channel.type: set the data channel type to kafka
+In a cluster deployment, when one node goes down, another node automatically takes over the unfinished Actors of the failed node
 
-  spring.kafka.bootstrap-servers: Kafka addresses; separate multiple addresses with commas
+**Note:** Alpha receives transaction events from Kafka with at-least-once delivery, so make sure the Kafka service is reliable
 
-  
+**Note:** The Alpha state machine stores its current state in Redis, and after a node goes down the state machine is restored on another cluster node from Redis, so make sure the Redis service is reliable
 
+**Note:** The Elasticsearch bulk commit size set by `alpha.feature.akka.transaction.repository.elasticsearch.batchSize` defaults to 100; set this parameter to 0 in scenarios with high data-reliability requirements
 
+## Dynamic Scaling
 
-## Roadmap
+After Alpha receives an event message, it puts it into Kafka; all nodes in the Alpha cluster consume from Kafka and hand the events to the state machine. The topic is created with 6 partitions by default; when you deploy more than 6 cluster nodes, you can change the default with the `kafka.numPartitions` parameter
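+
+For example, assuming a hypothetical 8-node deployment, the partition count could be raised like this (the value 8 is an illustrative choice of one partition per node, not a recommendation from this document):
+
+```properties
+kafka.numPartitions=8
+```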
 
-Akka cluster support
+## Roadmap
 
 Integrate Swagger with the APIs
 


[servicecomb-pack] 08/42: SCB-1368 Add shard region selection Actor

Posted by ni...@apache.org.

commit 10c54afaafc5e2048f9a6cc7c34340cf0f4c77f8
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:26:57 2019 +0800

    SCB-1368 Add shard region selection Actor
---
 .../servicecomb/pack/alpha/fsm/SagaActor.java      | 28 ++++++----
 .../pack/alpha/fsm/SagaShardRegionActor.java       | 65 ++++++++++++++++++++++
 2 files changed, 81 insertions(+), 12 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
index fc06255..ba4b0ff 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
@@ -59,13 +59,13 @@ public class SagaActor extends
     return Props.create(SagaActor.class, persistenceId);
   }
 
-  private final String persistenceId;
+  private String persistenceId;
 
   private long sagaBeginTime;
   private long sagaEndTime;
 
-  public SagaActor(String persistenceId) {
-    this.persistenceId = persistenceId;
+  public SagaActor() {
+    this.persistenceId = getSelf().path().name();
 
     startWith(SagaActorState.IDLE, SagaData.builder().build());
 
@@ -380,6 +380,12 @@ public class SagaActor extends
 
   @Override
   public SagaData applyEvent(DomainEvent event, SagaData data) {
+    if (this.recoveryRunning()) {
+      LOG.info("SagaActor recovery {}",event.getEvent());
+    }
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("SagaActor apply event {}", event.getEvent());
+    }
     // log event to SagaData
     if (event.getEvent() != null && !(event
         .getEvent() instanceof ComponsitedCheckEvent)) {
@@ -404,6 +410,7 @@ public class SagaActor extends
             .compensationMethod(domainEvent.getCompensationMethod())
             .payloads(domainEvent.getPayloads())
             .state(domainEvent.getState())
+            .beginTime(domainEvent.getEvent().getCreateTime())
             .build();
         data.getTxEntityMap().put(txEntity.getLocalTxId(), txEntity);
       } else {
@@ -412,7 +419,7 @@ public class SagaActor extends
     } else if (event instanceof UpdateTxEventDomain) {
       UpdateTxEventDomain domainEvent = (UpdateTxEventDomain) event;
       TxEntity txEntity = data.getTxEntityMap().get(domainEvent.getLocalTxId());
-      txEntity.setEndTime(new Date());
+      txEntity.setEndTime(domainEvent.getEvent().getCreateTime());
       if (domainEvent.getState() == TxState.COMMITTED) {
         txEntity.setState(domainEvent.getState());
       } else if (domainEvent.getState() == TxState.FAILED) {
@@ -441,27 +448,24 @@ public class SagaActor extends
           }
         });
       } else if (domainEvent.getState() == SagaActorState.SUSPENDED) {
-        data.setEndTime(new Date());
+        data.setEndTime(event.getEvent().getCreateTime());
         data.setTerminated(true);
         data.setSuspendedType(domainEvent.getSuspendedType());
       } else if (domainEvent.getState() == SagaActorState.COMPENSATED) {
-        data.setEndTime(new Date());
+        data.setEndTime(event.getEvent().getCreateTime());
         data.setTerminated(true);
       } else if (domainEvent.getState() == SagaActorState.COMMITTED) {
-        data.setEndTime(new Date());
+        data.setEndTime(event.getEvent().getCreateTime());
         data.setTerminated(true);
       }
     }
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("applyEvent: {} {}", stateName(), stateData().getGlobalTxId());
-    }
     return data;
   }
 
   @Override
   public void onRecoveryCompleted() {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("onRecoveryCompleted: {} {}", stateName(), stateData().getGlobalTxId());
+    if (stateName() != SagaActorState.IDLE) {
+      LOG.info("SagaActor {} recovery completed, state={}", stateData().getGlobalTxId(), stateName());
     }
   }
 
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
new file mode 100644
index 0000000..d43ba85
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
@@ -0,0 +1,65 @@
+package org.apache.servicecomb.pack.alpha.fsm;
+
+import akka.actor.AbstractActor;
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import akka.actor.Props;
+import akka.cluster.sharding.ClusterSharding;
+import akka.cluster.sharding.ClusterShardingSettings;
+import akka.cluster.sharding.ShardRegion;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
+
+public class SagaShardRegionActor extends AbstractActor {
+
+  private final ActorRef workerRegion;
+
+  static ShardRegion.MessageExtractor messageExtractor = new ShardRegion.MessageExtractor() {
+    @Override
+    public String entityId(Object message) {
+      if (message instanceof BaseEvent) {
+        return ((BaseEvent) message).getGlobalTxId();
+      } else {
+        return null;
+      }
+    }
+
+    @Override
+    public Object entityMessage(Object message) {
+      return message;
+    }
+
+    @Override
+    public String shardId(Object message) {
+      int numberOfShards = 100;
+      if (message instanceof BaseEvent) {
+        String actorId = ((BaseEvent) message).getGlobalTxId();
+        // Math.floorMod keeps the shard id non-negative even when hashCode() is negative
+        return String.valueOf(Math.floorMod(actorId.hashCode(), numberOfShards));
+      } else if (message instanceof ShardRegion.StartEntity) {
+        String actorId = ((ShardRegion.StartEntity) message).entityId();
+        return String.valueOf(Math.floorMod(actorId.hashCode(), numberOfShards));
+      } else {
+        return null;
+      }
+  };
+
+  public SagaShardRegionActor() {
+    ActorSystem system = getContext().getSystem();
+    ClusterShardingSettings settings = ClusterShardingSettings.create(system);
+    workerRegion = ClusterSharding.get(system)
+        .start(
+            "saga-actor",
+            Props.create(SagaActor.class),
+            settings,
+            messageExtractor);
+  }
+
+  @Override
+  public Receive createReceive() {
+    return receiveBuilder()
+        .matchAny(msg -> {
+          workerRegion.tell(msg, getSelf());
+        })
+        .build();
+  }
+}


[servicecomb-pack] 17/42: SCB-1368 Kafka at-least-once delivery

Posted by ni...@apache.org.

commit f3472e03936ab782806234a942d0f1a2d21e9c8d
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 00:51:38 2019 +0800

    SCB-1368 Kafka at-least-once delivery
---
 .../fsm/channel/kafka/KafkaSagaEventConsumer.java  | 41 ++++++++++++----------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
index b816302..7790c12 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
@@ -19,14 +19,13 @@ package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
 
 import akka.actor.ActorRef;
 import akka.actor.ActorSystem;
-import akka.kafka.CommitterSettings;
+import akka.kafka.ConsumerMessage;
 import akka.kafka.ConsumerSettings;
 import akka.kafka.Subscriptions;
-import akka.kafka.javadsl.Committer;
 import akka.kafka.javadsl.Consumer;
 import akka.stream.ActorMaterializer;
 import akka.stream.Materializer;
-import akka.stream.javadsl.Keep;
+import akka.stream.javadsl.Sink;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.typesafe.config.Config;
 import java.lang.invoke.MethodHandles;
@@ -51,6 +50,7 @@ public class KafkaSagaEventConsumer extends AbstractEventConsumer {
       MetricsService metricsService, String bootstrap_servers, String topic) {
     super(actorSystem, sagaShardRegionActor, metricsService);
 
+
     // init consumer
     final Materializer materializer = ActorMaterializer.create(actorSystem);
     final Config consumerConfig = actorSystem.settings().config().getConfig("akka.kafka.consumer");
@@ -60,37 +60,42 @@ public class KafkaSagaEventConsumer extends AbstractEventConsumer {
             .withBootstrapServers(bootstrap_servers)
             .withGroupId(groupId)
             .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
-            .withProperty(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000")
             .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
             .withProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "StringDeserializer.class")
-            .withProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
-                "StringDeserializer.class");
-    CommitterSettings committerSettings = CommitterSettings.create(consumerConfig);
+            .withProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "StringDeserializer.class");
     Consumer.committableSource(consumerSettings, Subscriptions.topics(topic))
-        .mapAsync(1, event -> { // must be set to 1 for ordered
-          return sendSagaActor(event.record().key(), event.record().value())
-              .thenApply(done -> event.committableOffset());
+        .mapAsync(10, event -> {
+          BaseEvent bean = jsonMapper.readValue(event.record().value(), BaseEvent.class);
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("kafka receive {} {}", bean.getGlobalTxId(), bean.getType());
+          }
+          return sendSagaActor(bean).thenApply(done -> event.committableOffset());
         })
-        .toMat(Committer.sink(committerSettings), Keep.both())
-        .mapMaterializedValue(Consumer::createDrainingControl)
+        .batch(
+            100,
+            ConsumerMessage::createCommittableOffsetBatch,
+            ConsumerMessage.CommittableOffsetBatch::updated
+        )
+        .mapAsync(10, offset -> offset.commitJavadsl())
+        .to(Sink.ignore())
         .run(materializer);
   }
 
-  private CompletionStage<String> sendSagaActor(String key, String value) {
+  private CompletionStage<String> sendSagaActor(BaseEvent event) {
     try {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("key {}, value {}", key, value);
-      }
       long begin = System.currentTimeMillis();
       metricsService.metrics().doActorReceived();
-      sagaShardRegionActor.tell(jsonMapper.readValue(value, BaseEvent.class), sagaShardRegionActor);
+      sagaShardRegionActor.tell(event, sagaShardRegionActor);
       long end = System.currentTimeMillis();
       metricsService.metrics().doActorAccepted();
       metricsService.metrics().doActorAvgTime(end - begin);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("send saga actor {} {}", event, event.getType());
+      }
       return CompletableFuture.completedFuture("");
     } catch (Exception ex) {
      LOG.error(ex.getMessage(), ex);
       metricsService.metrics().doActorRejected();
-      LOG.error("key {}, value {}", key, value);
       throw new CompletionException(ex);
     }
   }


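The rewritten consumer above trades the previous `Committer.sink` wiring for manual offset batching: each record is only turned into a committable offset after `sendSagaActor` has accepted it, up to 100 offsets are folded into one `CommittableOffsetBatch`, and a single commit then acknowledges the whole batch. That is the standard Alpakka Kafka at-least-once shape. A rough standalone sketch of the fold-and-commit idea, using plain Java collections instead of the Alpakka classes (all names here are illustrative, not from the patch):

```java
import java.util.ArrayList;
import java.util.List;

// Simulates how ConsumerMessage.CommittableOffsetBatch folds per-record
// offsets: each processed record's offset is merged into a running batch,
// and one commit of the batch acknowledges everything folded so far.
public class OffsetBatchSketch {

  private final int maxBatchSize;
  private final List<Long> committed = new ArrayList<>(); // commits actually issued
  private long highestInBatch = -1;
  private int inBatch = 0;

  public OffsetBatchSketch(int maxBatchSize) {
    this.maxBatchSize = maxBatchSize;
  }

  // Called after a record has been processed (i.e. after sendSagaActor succeeds).
  public void offsetProcessed(long offset) {
    highestInBatch = Math.max(highestInBatch, offset); // batch.updated(offset)
    if (++inBatch == maxBatchSize) {
      flush();                                         // offset.commitJavadsl()
    }
  }

  // Issue one commit covering every offset folded since the last flush.
  public void flush() {
    if (inBatch > 0) {
      committed.add(highestInBatch);
      inBatch = 0;
    }
  }

  public List<Long> commits() {
    return committed;
  }

  public static void main(String[] args) {
    OffsetBatchSketch sketch = new OffsetBatchSketch(3);
    for (long offset = 0; offset < 7; offset++) {
      sketch.offsetProcessed(offset);
    }
    sketch.flush(); // final partial batch
    System.out.println(sketch.commits()); // seven records, only three commits: [2, 5, 6]
  }
}
```

Because the commit happens after processing, a crash between processing and commit makes the broker redeliver those records on restart; that is the at-least-once guarantee the commit message refers to, and it is why the downstream saga actor must tolerate duplicate events.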
[servicecomb-pack] 04/42: SCB-1368 Refactoring model alpha-fsm-channel-kafka and alpha-fsm-channel-redis to alpha-fsm

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 001a26d1dd6d0f7d4fa0f094bb0d52e7162bb66a
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:03:07 2019 +0800

    SCB-1368 Refactoring model alpha-fsm-channel-kafka and alpha-fsm-channel-redis to alpha-fsm
---
 alpha/alpha-fsm-channel-kafka/README.md            |  28 ----
 alpha/alpha-fsm-channel-kafka/pom.xml              | 113 -------------
 .../kafka/KafkaChannelAutoConfiguration.java       | 175 ---------------------
 .../fsm/channel/kafka/KafkaMessageListener.java    |  49 ------
 .../src/main/resources/META-INF/spring.factories   |  17 --
 .../channel/kafka/test/KafkaActorEventSink.java    |  31 ----
 .../fsm/channel/kafka/test/KafkaApplication.java   |  40 -----
 .../fsm/channel/kafka/test/KafkaChannelTest.java   |  95 -----------
 .../src/test/resources/log4j2.xml                  |  30 ----
 alpha/alpha-fsm-channel-redis/README.md            |  17 --
 alpha/alpha-fsm-channel-redis/pom.xml              |  99 ------------
 .../src/main/resources/META-INF/spring.factories   |  17 --
 .../pack/alpha/fsm/RedisChannelTest.java           | 130 ---------------
 .../servicecomb/pack/alpha/fsm/RedisEventSink.java |  32 ----
 .../src/test/resources/log4j2.xml                  |  30 ----
 .../pack/alpha/fsm/FsmAutoConfiguration.java       |  73 ++-------
 .../fsm/channel/AbstractActorEventChannel.java     |   3 -
 .../alpha/fsm/channel/AbstractEventConsumer.java   |  20 +++
 .../fsm/channel/ActiveMQActorEventChannel.java     |  43 -----
 .../{ => kafka}/KafkaActorEventChannel.java        |  14 +-
 .../kafka/KafkaChannelAutoConfiguration.java       | 149 ++++++++++++++++++
 .../fsm/channel/kafka/KafkaMessagePublisher.java   |  14 +-
 .../fsm/channel/kafka/KafkaSagaEventConsumer.java  |  97 ++++++++++++
 .../{ => memory}/MemoryActorEventChannel.java      |  33 +---
 .../alpha/fsm/channel/redis/MessageSerializer.java |  27 ++--
 .../{ => redis}/RedisActorEventChannel.java        |  18 +--
 .../redis/RedisChannelAutoConfiguration.java       |  49 ++++--
 .../fsm/channel/redis/RedisMessagePublisher.java   |  15 +-
 .../fsm/channel/redis/RedisSagaEventConsumer.java} |  50 +++---
 alpha/pom.xml                                      |   2 -
 30 files changed, 391 insertions(+), 1119 deletions(-)

diff --git a/alpha/alpha-fsm-channel-kafka/README.md b/alpha/alpha-fsm-channel-kafka/README.md
deleted file mode 100644
index 094bbfe..0000000
--- a/alpha/alpha-fsm-channel-kafka/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# FSM kafka channel
-## Enabled Saga State Machine Module
-
-Using `alpha.feature.akka.enabled=true` launch Alpha and Omega Side 
-Using `alpha.feature.akka.channel.type=kafka` launch Alpha and Omega Side 
-
-```properties
-alpha.feature.akka.enabled=true
-alpha.feature.akka.channel.type=kafka
-```
-
-setting spring boot kafka
-```
-spring.kafka.bootstrap-servers=kafka bootstrap_servers 
-spring.kafka.consumer.group-id=kafka consumer group id, default servicecomb-pack
-alpha.feature.akka.channel.kafka.topic= kafka topic name, default servicecomb-pack-actor-event
-spring.kafka.producer.batch-size= producer batch size, default 16384
-spring.kafka.producer.retries = producer retries, default 0
-spring.kafka.producer.buffer.memory = producer buffer memory, default 33554432
-spring.kafka.consumer.auto.offset.reset = consumer auto offset reset, default earliest
-spring.kafka.consumer.enable.auto.commit = consumer enable auto commit, default false
-spring.kafka.consumer.auto.commit.interval.ms = consumer auto commit interval ms, default 100
-spring.kafka.listener.ackMode = consumer listener ack mode , default AckMode.MANUAL_IMMEDIATE
-spring.kafka.listener.pollTimeout = consumer listener pool timeout, default 1500 ms
-
-kafka.numPartitions = kafka topic partitions, default 6
-kafka.replicationFactor = kafka topic replication, default 1
-```
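The deleted README above mixes property names with prose descriptions on the same line. For reference, its property list corresponds to a concrete Spring Boot configuration like the following, with the README's stated defaults spelled out (the broker address is an example, not from the patch):

```properties
alpha.feature.akka.enabled=true
alpha.feature.akka.channel.type=kafka

# Kafka connection (example broker address)
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=servicecomb-pack
alpha.feature.akka.channel.kafka.topic=servicecomb-pack-actor-event

# Producer tuning (README defaults)
spring.kafka.producer.batch-size=16384
spring.kafka.producer.retries=0
spring.kafka.producer.buffer.memory=33554432

# Offsets are committed manually by the listener, not auto-committed
spring.kafka.consumer.auto.offset.reset=earliest
spring.kafka.consumer.enable.auto.commit=false
spring.kafka.consumer.auto.commit.interval.ms=100
spring.kafka.listener.ackMode=MANUAL_IMMEDIATE
spring.kafka.listener.pollTimeout=1500

# Topic auto-creation
kafka.numPartitions=6
kafka.replicationFactor=1
```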
diff --git a/alpha/alpha-fsm-channel-kafka/pom.xml b/alpha/alpha-fsm-channel-kafka/pom.xml
deleted file mode 100644
index c7d4eff..0000000
--- a/alpha/alpha-fsm-channel-kafka/pom.xml
+++ /dev/null
@@ -1,113 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one or more
-  ~ contributor license agreements.  See the NOTICE file distributed with
-  ~ this work for additional information regarding copyright ownership.
-  ~ The ASF licenses this file to You under the Apache License, Version 2.0
-  ~ (the "License"); you may not use this file except in compliance with
-  ~ the License.  You may obtain a copy of the License at
-  ~
-  ~      http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <parent>
-    <artifactId>alpha</artifactId>
-    <groupId>org.apache.servicecomb.pack</groupId>
-    <version>0.6.0-SNAPSHOT</version>
-  </parent>
-  <modelVersion>4.0.0</modelVersion>
-
-  <artifactId>alpha-fsm-channel-kafka</artifactId>
-  <name>Pack::Alpha::Fsm::channel::kafka</name>
-
-  <properties>
-    <leveldbjni-all.version>1.8</leveldbjni-all.version>
-    <akka-persistence-redis.version>0.4.0</akka-persistence-redis.version>
-  </properties>
-
-  <dependencyManagement>
-    <dependencies>
-      <dependency>
-        <groupId>org.springframework.boot</groupId>
-        <artifactId>spring-boot-dependencies</artifactId>
-        <version>${spring.boot.version}</version>
-        <type>pom</type>
-        <scope>import</scope>
-      </dependency>
-      <dependency>
-        <groupId>com.typesafe.akka</groupId>
-        <artifactId>akka-persistence_2.12</artifactId>
-        <version>${akka.version}</version>
-      </dependency>
-    </dependencies>
-  </dependencyManagement>
-
-  <dependencies>
-    <!-- spring boot -->
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-autoconfigure</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.servicecomb.pack</groupId>
-      <artifactId>pack-common</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.servicecomb.pack</groupId>
-      <artifactId>alpha-core</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>com.google.guava</groupId>
-      <artifactId>guava</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-starter-log4j2</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.logging.log4j</groupId>
-      <artifactId>log4j-api</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.logging.log4j</groupId>
-      <artifactId>log4j-core</artifactId>
-      <scope>test</scope>
-    </dependency>
-
-    <!-- For testing the artifacts scope are test-->
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-starter-test</artifactId>
-      <exclusions>
-        <exclusion>
-          <groupId>org.springframework.boot</groupId>
-          <artifactId>spring-boot-starter-logging</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.springframework.kafka</groupId>
-      <artifactId>spring-kafka</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.springframework.kafka</groupId>
-      <artifactId>spring-kafka-test</artifactId>
-      <scope>test</scope>
-    </dependency>
-</dependencies>
-
-</project>
diff --git a/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java b/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
deleted file mode 100644
index 6729be6..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
+++ /dev/null
@@ -1,175 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
-
-import com.google.common.collect.Maps;
-import org.apache.kafka.clients.admin.AdminClientConfig;
-import org.apache.kafka.clients.admin.NewTopic;
-import org.apache.kafka.clients.consumer.ConsumerConfig;
-import org.apache.kafka.clients.producer.ProducerConfig;
-import org.apache.kafka.common.serialization.StringDeserializer;
-import org.apache.kafka.common.serialization.StringSerializer;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.beans.factory.annotation.Qualifier;
-import org.springframework.beans.factory.annotation.Value;
-import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
-import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
-import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
-import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
-import org.springframework.context.annotation.Bean;
-import org.springframework.context.annotation.Configuration;
-import org.springframework.context.annotation.Lazy;
-import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
-import org.springframework.kafka.config.KafkaListenerContainerFactory;
-import org.springframework.kafka.core.*;
-import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
-import org.springframework.kafka.listener.ContainerProperties;
-import org.springframework.kafka.support.serializer.JsonDeserializer;
-import org.springframework.kafka.support.serializer.JsonSerializer;
-
-import java.util.Map;
-
-@Configuration
-@ConditionalOnClass(KafkaProperties.class)
-@ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "kafka")
-public class KafkaChannelAutoConfiguration {
-
-    private static final Logger logger = LoggerFactory.getLogger(KafkaChannelAutoConfiguration.class);
-
-    @Value("${alpha.feature.akka.channel.kafka.topic:servicecomb-pack-actor-event}")
-    private String topic;
-
-    @Value("${spring.kafka.bootstrap-servers}")
-    private String bootstrap_servers;
-
-    @Value("${spring.kafka.consumer.group-id:servicecomb-pack}")
-    private String groupId;
-
-    @Value("${spring.kafka.consumer.properties.spring.json.trusted.packages:org.apache.servicecomb.pack.alpha.core.fsm.event,org.apache.servicecomb.pack.alpha.core.fsm.event.base,}org.apache.servicecomb.pack.alpha.core.fsm.event.internal")
-    private String trusted_packages;
-
-    @Value("${spring.kafka.producer.batch-size:16384}")
-    private int batchSize;
-
-    @Value("${spring.kafka.producer.retries:0}")
-    private int retries;
-
-    @Value("${spring.kafka.producer.buffer.memory:33554432}")
-    private long bufferMemory;
-
-    @Value("${spring.kafka.consumer.auto.offset.reset:earliest}")
-    private String autoOffsetReset;
-
-    @Value("${spring.kafka.consumer.enable.auto.commit:false}")
-    private boolean enableAutoCommit;
-
-    @Value("${spring.kafka.consumer.auto.commit.interval.ms:100}")
-    private int autoCommitIntervalMs;
-
-    @Value("${spring.kafka.listener.ackMode:MANUAL_IMMEDIATE}")
-    private String ackMode;
-
-    @Value("${spring.kafka.listener.pollTimeout:1500}")
-    private long poolTimeout;
-
-    @Value("${kafka.numPartitions:6}")
-    private int  numPartitions;
-
-    @Value("${kafka.replicationFactor:1}")
-    private short replicationFactor;
-
-    @Bean
-    @ConditionalOnMissingBean
-    public ProducerFactory<String, Object> producerFactory(){
-        Map<String, Object> map = Maps.newHashMap();
-        map.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers);
-        map.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
-        map.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
-        map.put(ProducerConfig.RETRIES_CONFIG, retries);
-        map.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
-        map.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
-
-        return new DefaultKafkaProducerFactory<>(map);
-    }
-
-    @Bean
-    @ConditionalOnMissingBean
-    public KafkaTemplate<String, Object> kafkaTemplate(){
-        return new KafkaTemplate<>(producerFactory());
-    }
-
-    @Bean
-    @ConditionalOnMissingBean
-    public ConsumerFactory<String, Object> consumerFactory(){
-        Map<String, Object> map = Maps.newHashMap();
-
-        map.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers);
-        map.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
-        map.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
-        map.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
-        map.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
-        map.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
-        map.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitIntervalMs);
-        map.put(JsonDeserializer.TRUSTED_PACKAGES, trusted_packages);
-
-        if(logger.isDebugEnabled()){
-            logger.debug("init consumerFactory properties = [{}]", map);
-        }
-        return new DefaultKafkaConsumerFactory<>(map);
-    }
-
-    @Bean
-    @ConditionalOnMissingBean
-    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, Object>> kafkaListenerContainerFactory(){
-        ConcurrentKafkaListenerContainerFactory<String,Object> concurrentKafkaListenerContainerFactory =
-                new ConcurrentKafkaListenerContainerFactory<>();
-        concurrentKafkaListenerContainerFactory.setConsumerFactory(consumerFactory());
-        concurrentKafkaListenerContainerFactory.getContainerProperties().setPollTimeout(poolTimeout);
-        concurrentKafkaListenerContainerFactory.getContainerProperties().setAckMode(ContainerProperties.AckMode.valueOf(ackMode));
-
-        return concurrentKafkaListenerContainerFactory;
-    }
-    @Bean
-    @ConditionalOnMissingBean
-    public KafkaMessagePublisher kafkaMessagePublisher(KafkaTemplate<String, Object> kafkaTemplate){
-        return new KafkaMessagePublisher(topic, kafkaTemplate);
-    }
-
-    @Bean
-    @ConditionalOnMissingBean
-    public KafkaMessageListener kafkaMessageListener(@Lazy @Qualifier("actorEventSink") ActorEventSink actorEventSink){
-        return new KafkaMessageListener(actorEventSink);
-    }
-
-    @Bean
-    @ConditionalOnMissingBean
-    public KafkaAdmin kafkaAdmin(){
-        Map<String, Object> map = Maps.newHashMap();
-
-        map.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers);
-
-        return new KafkaAdmin(map);
-    }
-
-    @Bean
-    @ConditionalOnMissingBean
-    public NewTopic newTopic(){
-        return new NewTopic(topic, numPartitions, replicationFactor);
-    }
-}
\ No newline at end of file
diff --git a/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessageListener.java b/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessageListener.java
deleted file mode 100644
index 8d1f880..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessageListener.java
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
-
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.kafka.annotation.KafkaListener;
-import org.springframework.kafka.support.Acknowledgment;
-
-public class KafkaMessageListener {
-
-    private static final Logger logger = LoggerFactory.getLogger(KafkaMessageListener.class);
-
-    private ActorEventSink actorEventSink;
-
-    public KafkaMessageListener(ActorEventSink actorEventSink) {
-        this.actorEventSink = actorEventSink;
-    }
-
-    @KafkaListener(topics = "${alpha.feature.akka.channel.kafka.topic:servicecomb-pack-actor-event}")
-    public void listener(BaseEvent baseEvent, Acknowledgment acknowledgment){
-        if(logger.isDebugEnabled()){
-            logger.debug("listener event = [{}]", baseEvent);
-        }
-
-        try {
-            actorEventSink.send(baseEvent);
-            acknowledgment.acknowledge();
-        }catch (Exception e){
-            logger.error("subscriber Exception = [{}]", e.getMessage(), e);
-        }
-    }
-}
\ No newline at end of file
diff --git a/alpha/alpha-fsm-channel-kafka/src/main/resources/META-INF/spring.factories b/alpha/alpha-fsm-channel-kafka/src/main/resources/META-INF/spring.factories
deleted file mode 100644
index 9366e98..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/main/resources/META-INF/spring.factories
+++ /dev/null
@@ -1,17 +0,0 @@
-## ---------------------------------------------------------------------------
-## Licensed to the Apache Software Foundation (ASF) under one or more
-## contributor license agreements.  See the NOTICE file distributed with
-## this work for additional information regarding copyright ownership.
-## The ASF licenses this file to You under the Apache License, Version 2.0
-## (the "License"); you may not use this file except in compliance with
-## the License.  You may obtain a copy of the License at
-##
-##      http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS,
-## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-## See the License for the specific language governing permissions and
-## limitations under the License.
-## ---------------------------------------------------------------------------
-org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.apache.servicecomb.pack.alpha.fsm.channel.kafka.KafkaChannelAutoConfiguration
diff --git a/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaActorEventSink.java b/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaActorEventSink.java
deleted file mode 100644
index b392a94..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaActorEventSink.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm.channel.kafka.test;
-
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-
-import java.util.concurrent.CountDownLatch;
-
-public class KafkaActorEventSink implements ActorEventSink {
-    public static final CountDownLatch countDownLatch = new CountDownLatch(8);
-
-    @Override
-    public void send(BaseEvent event) throws Exception {
-        countDownLatch.countDown();
-    }
-}
diff --git a/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaApplication.java b/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaApplication.java
deleted file mode 100644
index 9b001eb..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaApplication.java
+++ /dev/null
@@ -1,40 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm.channel.kafka.test;
-
-import org.apache.servicecomb.pack.alpha.core.NodeStatus;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.springframework.boot.SpringApplication;
-import org.springframework.boot.autoconfigure.SpringBootApplication;
-import org.springframework.context.annotation.Bean;
-
-@SpringBootApplication
-public class KafkaApplication {
-    public static void main(String[] args) {
-        SpringApplication.run(KafkaApplication.class, args);
-    }
-
-    @Bean(name = "actorEventSink")
-    public ActorEventSink actorEventSink(){
-        return new KafkaActorEventSink();
-    }
-
-    @Bean(name = "nodeStatus")
-    public NodeStatus nodeStatus(){
-        return new NodeStatus(NodeStatus.TypeEnum.MASTER);
-    }
-}
diff --git a/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaChannelTest.java b/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaChannelTest.java
deleted file mode 100644
index 942b16d..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/test/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/test/KafkaChannelTest.java
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm.channel.kafka.test;
-
-import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaEndedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaStartedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.TxEndedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.TxStartedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.fsm.channel.kafka.KafkaMessagePublisher;
-import org.junit.Before;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.boot.test.context.SpringBootTest;
-import org.springframework.kafka.test.EmbeddedKafkaBroker;
-import org.springframework.kafka.test.context.EmbeddedKafka;
-import org.springframework.test.context.junit4.SpringRunner;
-
-import java.util.*;
-import java.util.concurrent.TimeUnit;
-
-import static org.junit.Assert.assertEquals;
-
-
-@RunWith(SpringRunner.class)
-@SpringBootTest(classes = KafkaApplication.class,
-        properties = {
-                "alpha.feature.akka.enabled=true",
-                "alpha.feature.akka.channel.type=kafka",
-                "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}",
-                "spring.kafka.consumer.group-id=messageListener"
-        }
-)
-@EmbeddedKafka
-public class KafkaChannelTest {
-    @Autowired
-    private EmbeddedKafkaBroker embeddedKafkaBroker;
-
-    @Autowired
-    private KafkaMessagePublisher kafkaMessagePublisher;
-
-    @Autowired
-    private KafkaActorEventSink actorEventSink;
-
-    @Before
-    public void setup(){
-    }
-    @Test
-    public void testProducer(){
-
-        String globalTxId = UUID.randomUUID().toString().replaceAll("-", "");
-        String localTxId_1 = UUID.randomUUID().toString().replaceAll("-", "");
-        String localTxId_2 = UUID.randomUUID().toString().replaceAll("-", "");
-        String localTxId_3 = UUID.randomUUID().toString().replaceAll("-", "");
-
-        buildData(globalTxId, localTxId_1, localTxId_2, localTxId_3).forEach(baseEvent -> kafkaMessagePublisher.publish(baseEvent));
-
-        try {
-            // Waiting for sub
-            TimeUnit.SECONDS.sleep(5);
-        } catch (InterruptedException e) {
-        }
-
-        assertEquals(0, actorEventSink.countDownLatch.getCount());
-
-    }
-
-    private List<BaseEvent> buildData(String globalTxId, String localTxId_1, String localTxId_2, String localTxId_3){
-        List<BaseEvent> sagaEvents = new ArrayList<>();
-        sagaEvents.add(SagaStartedEvent.builder().serviceName("service_g").instanceId("instance_g").globalTxId(globalTxId).build());
-        sagaEvents.add(TxStartedEvent.builder().serviceName("service_c1").instanceId("instance_c1").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_1).build());
-        sagaEvents.add(TxEndedEvent.builder().serviceName("service_c1").instanceId("instance_c1").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_1).build());
-        sagaEvents.add(TxStartedEvent.builder().serviceName("service_c2").instanceId("instance_c2").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_2).build());
-        sagaEvents.add(TxEndedEvent.builder().serviceName("service_c2").instanceId("instance_c2").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_2).build());
-        sagaEvents.add(TxStartedEvent.builder().serviceName("service_c3").instanceId("instance_c3").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_3).build());
-        sagaEvents.add(TxEndedEvent.builder().serviceName("service_c3").instanceId("instance_c3").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_3).build());
-        sagaEvents.add(SagaEndedEvent.builder().serviceName("service_g").instanceId("instance_g").globalTxId(globalTxId).build());
-        return sagaEvents;
-    }
-}
diff --git a/alpha/alpha-fsm-channel-kafka/src/test/resources/log4j2.xml b/alpha/alpha-fsm-channel-kafka/src/test/resources/log4j2.xml
deleted file mode 100644
index 8c2def9..0000000
--- a/alpha/alpha-fsm-channel-kafka/src/test/resources/log4j2.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one or more
-  ~ contributor license agreements.  See the NOTICE file distributed with
-  ~ this work for additional information regarding copyright ownership.
-  ~ The ASF licenses this file to You under the Apache License, Version 2.0
-  ~ (the "License"); you may not use this file except in compliance with
-  ~ the License.  You may obtain a copy of the License at
-  ~
-  ~      http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<Configuration status="WARN">
-  <Appenders>
-    <Console name="Console" target="SYSTEM_OUT">
-      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
-    </Console>
-  </Appenders>
-  <Loggers>
-    <Root level="debug">
-      <AppenderRef ref="Console"/>
-    </Root>
-  </Loggers>
-</Configuration>
diff --git a/alpha/alpha-fsm-channel-redis/README.md b/alpha/alpha-fsm-channel-redis/README.md
deleted file mode 100644
index 98ecb93..0000000
--- a/alpha/alpha-fsm-channel-redis/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# FSM Redis channel
-## Enabled Saga State Machine Module
-
-Using `alpha.feature.akka.enabled=true` launch Alpha and Omega Side 
-Using `alpha.feature.akka.channel.type=redis` launch Alpha and Omega Side 
-
-```properties
-alpha.feature.akka.enabled=true
-alpha.feature.akka.channel.type=redis
-```
-
-setting spring boot redis
-```
-spring.redis.host=your_redis_host
-spring.redis.port=your_redis_port
-spring.redis.password=your_redis_password
-```
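The deleted README covered only host, port and password. For completeness: the `commons-pool2` dependency declared in this module's pom backs the Redis connection pool, which (assuming the default Lettuce client of Spring Boot 2) is tunable through standard Spring Boot properties. A hedged example with placeholder values:

```properties
spring.redis.lettuce.pool.max-active=8
spring.redis.lettuce.pool.max-idle=8
spring.redis.lettuce.pool.min-idle=0
```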
diff --git a/alpha/alpha-fsm-channel-redis/pom.xml b/alpha/alpha-fsm-channel-redis/pom.xml
deleted file mode 100644
index fdab9f2..0000000
--- a/alpha/alpha-fsm-channel-redis/pom.xml
+++ /dev/null
@@ -1,99 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one or more
-  ~ contributor license agreements.  See the NOTICE file distributed with
-  ~ this work for additional information regarding copyright ownership.
-  ~ The ASF licenses this file to You under the Apache License, Version 2.0
-  ~ (the "License"); you may not use this file except in compliance with
-  ~ the License.  You may obtain a copy of the License at
-  ~
-  ~      http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <parent>
-    <artifactId>alpha</artifactId>
-    <groupId>org.apache.servicecomb.pack</groupId>
-    <version>0.6.0-SNAPSHOT</version>
-  </parent>
-  <modelVersion>4.0.0</modelVersion>
-
-  <artifactId>alpha-fsm-channel-redis</artifactId>
-  <name>Pack::Alpha::Fsm::channel::redis</name>
-
-  <properties>
-  </properties>
-
-  <dependencyManagement>
-    <dependencies>
-      <dependency>
-        <groupId>org.springframework.boot</groupId>
-        <artifactId>spring-boot-dependencies</artifactId>
-        <version>${spring.boot.version}</version>
-        <type>pom</type>
-        <scope>import</scope>
-      </dependency>
-    </dependencies>
-  </dependencyManagement>
-
-  <dependencies>
-    <!-- spring boot -->
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-autoconfigure</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.servicecomb.pack</groupId>
-      <artifactId>pack-common</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.servicecomb.pack</groupId>
-      <artifactId>alpha-core</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>com.google.guava</groupId>
-      <artifactId>guava</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-starter-data-redis</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.commons</groupId>
-      <artifactId>commons-pool2</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-starter-log4j2</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.logging.log4j</groupId>
-      <artifactId>log4j-api</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.logging.log4j</groupId>
-      <artifactId>log4j-core</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <!-- For testing the artifacts scope are test-->
-    <dependency>
-      <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-starter-test</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-    </dependency>
-  </dependencies>
-
-</project>
diff --git a/alpha/alpha-fsm-channel-redis/src/main/resources/META-INF/spring.factories b/alpha/alpha-fsm-channel-redis/src/main/resources/META-INF/spring.factories
deleted file mode 100644
index 0810009..0000000
--- a/alpha/alpha-fsm-channel-redis/src/main/resources/META-INF/spring.factories
+++ /dev/null
@@ -1,17 +0,0 @@
-## ---------------------------------------------------------------------------
-## Licensed to the Apache Software Foundation (ASF) under one or more
-## contributor license agreements.  See the NOTICE file distributed with
-## this work for additional information regarding copyright ownership.
-## The ASF licenses this file to You under the Apache License, Version 2.0
-## (the "License"); you may not use this file except in compliance with
-## the License.  You may obtain a copy of the License at
-##
-##      http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS,
-## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-## See the License for the specific language governing permissions and
-## limitations under the License.
-## ---------------------------------------------------------------------------
-org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.apache.servicecomb.pack.alpha.fsm.channel.redis.RedisChannelAutoConfiguration
diff --git a/alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisChannelTest.java b/alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisChannelTest.java
deleted file mode 100644
index 858919f..0000000
--- a/alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisChannelTest.java
+++ /dev/null
@@ -1,130 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm;
-
-import org.apache.servicecomb.pack.alpha.core.NodeStatus;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaEndedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaStartedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.TxEndedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.TxStartedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.fsm.channel.redis.MessageSerializer;
-import org.apache.servicecomb.pack.alpha.fsm.channel.redis.RedisMessagePublisher;
-import org.apache.servicecomb.pack.alpha.fsm.channel.redis.RedisMessageSubscriber;
-import org.junit.Before;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.mockito.Mock;
-import org.mockito.Spy;
-import org.mockito.junit.MockitoJUnitRunner;
-import org.springframework.data.redis.connection.DefaultMessage;
-import org.springframework.data.redis.connection.RedisConnection;
-import org.springframework.data.redis.connection.RedisConnectionFactory;
-import org.springframework.data.redis.core.RedisTemplate;
-import org.springframework.data.redis.listener.ChannelTopic;
-import org.springframework.data.redis.listener.RedisMessageListenerContainer;
-import org.springframework.data.redis.listener.adapter.MessageListenerAdapter;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.UUID;
-
-import static org.junit.Assert.assertEquals;
-import static org.mockito.Mockito.*;
-
-
-@RunWith(MockitoJUnitRunner.class)
-public class RedisChannelTest {
-
-  @Mock
-  private RedisConnection redisConnection;
-
-  @Mock
-  private RedisTemplate<String, Object> redisTemplate;
-
-  @Mock
-  private RedisConnectionFactory redisConnectionFactory;
-
-  @Spy
-  private ChannelTopic channelTopic = new ChannelTopic("redis-channel");
-
-  private RedisMessageListenerContainer redisMessageListenerContainer;
-
-  @Spy
-  private NodeStatus nodeStatus = new NodeStatus(NodeStatus.TypeEnum.MASTER);
-
-  @Spy
-  private RedisEventSink actorEventSink = new RedisEventSink();
-
-  private RedisMessagePublisher redisMessagePublisher;
-
-  private RedisMessageSubscriber redisMessageSubscriber;
-
-  private MessageListenerAdapter messageListenerAdapter;
-
-  @Before
-  public void setup(){
-    when(redisConnectionFactory.getConnection()).thenReturn(redisConnection);
-
-    redisTemplate.afterPropertiesSet();
-
-    redisMessageSubscriber = new RedisMessageSubscriber(actorEventSink, nodeStatus);
-    messageListenerAdapter = new MessageListenerAdapter(redisMessageSubscriber);
-    messageListenerAdapter.afterPropertiesSet();
-
-    redisMessageListenerContainer = new RedisMessageListenerContainer();
-    redisMessageListenerContainer.setConnectionFactory(redisConnectionFactory);
-    redisMessageListenerContainer.addMessageListener(messageListenerAdapter, channelTopic);
-    redisMessageListenerContainer.afterPropertiesSet();
-    redisMessageListenerContainer.start();
-
-    redisMessagePublisher = new RedisMessagePublisher(redisTemplate, channelTopic);
-
-  }
-
-
-  @Test
-  public void testRedisPubSub(){
-    final String globalTxId = UUID.randomUUID().toString().replaceAll("-", "");
-    final String localTxId1 = UUID.randomUUID().toString().replaceAll("-", "");
-    final String localTxId2 = UUID.randomUUID().toString().replaceAll("-", "");
-    final String localTxId3 = UUID.randomUUID().toString().replaceAll("-", "");
-
-    MessageSerializer messageSerializer = new MessageSerializer();
-    buildData(globalTxId, localTxId1, localTxId2, localTxId3).forEach(baseEvent -> {
-      redisMessagePublisher.publish(baseEvent);
-      redisMessageSubscriber.onMessage(new DefaultMessage(channelTopic.getTopic().getBytes(), messageSerializer.serializer(baseEvent).orElse(new byte[0])), channelTopic.getTopic().getBytes());
-    });
-
-    assertEquals(0, actorEventSink.countDownLatch.getCount());
-  }
-
-  private List<BaseEvent> buildData(String globalTxId, String localTxId_1, String localTxId_2, String localTxId_3){
-    List<BaseEvent> sagaEvents = new ArrayList<>();
-    sagaEvents.add(SagaStartedEvent.builder().serviceName("service_g").instanceId("instance_g").globalTxId(globalTxId).build());
-    sagaEvents.add(TxStartedEvent.builder().serviceName("service_c1").instanceId("instance_c1").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_1).build());
-    sagaEvents.add(TxEndedEvent.builder().serviceName("service_c1").instanceId("instance_c1").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_1).build());
-    sagaEvents.add(TxStartedEvent.builder().serviceName("service_c2").instanceId("instance_c2").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_2).build());
-    sagaEvents.add(TxEndedEvent.builder().serviceName("service_c2").instanceId("instance_c2").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_2).build());
-    sagaEvents.add(TxStartedEvent.builder().serviceName("service_c3").instanceId("instance_c3").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_3).build());
-    sagaEvents.add(TxEndedEvent.builder().serviceName("service_c3").instanceId("instance_c3").globalTxId(globalTxId).parentTxId(globalTxId).localTxId(localTxId_3).build());
-    sagaEvents.add(SagaEndedEvent.builder().serviceName("service_g").instanceId("instance_g").globalTxId(globalTxId).build());
-    return sagaEvents;
-  }
-}
-
-
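The deleted test exercises `messageSerializer.serializer(baseEvent)`, which yields an `Optional<byte[]>` for the Redis payload. A minimal sketch of such an `Optional`-wrapped Java-serialization roundtrip, with a hypothetical `Event` class standing in for `BaseEvent` (the real `MessageSerializer` implementation may differ):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Optional;

public class SerializerSketch {

  // Stand-in for BaseEvent; the real events are Serializable builder-built POJOs.
  static class Event implements Serializable {
    private static final long serialVersionUID = 1L;
    final String globalTxId;
    Event(String globalTxId) { this.globalTxId = globalTxId; }
  }

  // Mirrors the Optional<byte[]> shape the test consumes via orElse(new byte[0]).
  static Optional<byte[]> serialize(Object data) {
    try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(data);
      out.flush();
      return Optional.of(bytes.toByteArray());
    } catch (IOException e) {
      return Optional.empty();
    }
  }

  static Optional<Object> deserialize(byte[] bytes) {
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
      return Optional.of(in.readObject());
    } catch (IOException | ClassNotFoundException e) {
      return Optional.empty();
    }
  }

  public static void main(String[] args) {
    Event event = new Event("tx-1");
    byte[] payload = serialize(event).orElse(new byte[0]);
    Event copy = (Event) deserialize(payload).orElseThrow(IllegalStateException::new);
    System.out.println(copy.globalTxId); // prints tx-1
  }
}
```

Returning `Optional.empty()` on failure lets the subscriber side fall back to an empty payload instead of propagating a checked exception through the message listener.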
diff --git a/alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisEventSink.java b/alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisEventSink.java
deleted file mode 100644
index f44342e..0000000
--- a/alpha/alpha-fsm-channel-redis/src/test/java/org/apache/servicecomb/pack/alpha/fsm/RedisEventSink.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.servicecomb.pack.alpha.fsm;
-
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-
-import java.util.concurrent.CountDownLatch;
-
-public class RedisEventSink implements ActorEventSink {
-
-  public static final CountDownLatch countDownLatch = new CountDownLatch(8);
-
-  @Override
-  public void send(BaseEvent event) throws Exception {
-    countDownLatch.countDown();
-  }
-}
diff --git a/alpha/alpha-fsm-channel-redis/src/test/resources/log4j2.xml b/alpha/alpha-fsm-channel-redis/src/test/resources/log4j2.xml
deleted file mode 100644
index 58924c6..0000000
--- a/alpha/alpha-fsm-channel-redis/src/test/resources/log4j2.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one or more
-  ~ contributor license agreements.  See the NOTICE file distributed with
-  ~ this work for additional information regarding copyright ownership.
-  ~ The ASF licenses this file to You under the Apache License, Version 2.0
-  ~ (the "License"); you may not use this file except in compliance with
-  ~ the License.  You may obtain a copy of the License at
-  ~
-  ~      http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<Configuration status="WARN">
-  <Appenders>
-    <Console name="Console" target="SYSTEM_OUT">
-      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
-    </Console>
-  </Appenders>
-  <Loggers>
-    <Root level="info">
-      <AppenderRef ref="Console"/>
-    </Root>
-  </Loggers>
-</Configuration>
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
index 4d861f6..777ae3f 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
@@ -20,53 +20,47 @@ package org.apache.servicecomb.pack.alpha.fsm;
 import static org.apache.servicecomb.pack.alpha.fsm.spring.integration.akka.SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER;
 import static org.apache.servicecomb.pack.alpha.fsm.spring.integration.akka.SpringAkkaExtension.SPRING_EXTENSION_PROVIDER;
 
+import akka.actor.ActorRef;
 import akka.actor.ActorSystem;
+import akka.actor.Props;
 import com.typesafe.config.Config;
 import com.typesafe.config.ConfigFactory;
 import java.util.Map;
 import javax.annotation.PostConstruct;
-import org.apache.servicecomb.pack.alpha.fsm.channel.ActiveMQActorEventChannel;
-import org.apache.servicecomb.pack.alpha.fsm.channel.kafka.KafkaMessagePublisher;
-import org.apache.servicecomb.pack.alpha.fsm.channel.redis.RedisMessagePublisher;
+import org.apache.servicecomb.pack.alpha.fsm.channel.kafka.KafkaChannelAutoConfiguration;
+import org.apache.servicecomb.pack.alpha.fsm.channel.memory.MemoryChannelAutoConfiguration;
+import org.apache.servicecomb.pack.alpha.fsm.channel.redis.RedisChannelAutoConfiguration;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.apache.servicecomb.pack.alpha.fsm.repository.NoneTransactionRepository;
+import org.apache.servicecomb.pack.alpha.fsm.repository.channel.DefaultTransactionRepositoryChannel;
 import org.apache.servicecomb.pack.alpha.fsm.repository.elasticsearch.ElasticsearchTransactionRepository;
 import org.apache.servicecomb.pack.alpha.fsm.repository.TransactionRepository;
-import org.apache.servicecomb.pack.alpha.fsm.repository.channel.MemoryTransactionRepositoryChannel;
 import org.apache.servicecomb.pack.alpha.fsm.repository.TransactionRepositoryChannel;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.apache.servicecomb.pack.alpha.core.fsm.channel.ActorEventChannel;
-import org.apache.servicecomb.pack.alpha.fsm.channel.KafkaActorEventChannel;
-import org.apache.servicecomb.pack.alpha.fsm.channel.MemoryActorEventChannel;
-import org.apache.servicecomb.pack.alpha.fsm.channel.RedisActorEventChannel;
-import org.apache.servicecomb.pack.alpha.fsm.sink.SagaActorEventSender;
 import org.apache.servicecomb.pack.alpha.fsm.spring.integration.akka.AkkaConfigPropertyAdapter;
 import org.springframework.beans.factory.annotation.Value;
+import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
 import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
 import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
 import org.springframework.context.ConfigurableApplicationContext;
 import org.springframework.context.annotation.Bean;
 import org.springframework.context.annotation.Configuration;
-import org.springframework.context.annotation.Lazy;
 import org.springframework.core.env.ConfigurableEnvironment;
 import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
 
 @Configuration
+@ImportAutoConfiguration({
+    MemoryChannelAutoConfiguration.class,
+    KafkaChannelAutoConfiguration.class,
+    RedisChannelAutoConfiguration.class})
 @ConditionalOnProperty(value = {"alpha.feature.akka.enabled"})
 public class FsmAutoConfiguration {
 
-  @Value("${alpha.feature.akka.channel.memory.size:-1}")
-  int memoryEventChannelMemorySize;
-
   @Value("${alpha.feature.akka.transaction.repository.elasticsearch.batchSize:1000}")
   int repositoryElasticsearchBatchSize;
 
   @Value("${alpha.feature.akka.transaction.repository.elasticsearch.refreshTime:5000}")
   int repositoryElasticsearchRefreshTime;
 
-  @Value("${alpha.feature.akka.transaction.repository.elasticsearch.memory.size:-1}")
-  int memoryTransactionRepositoryChannelSize;
-
   @PostConstruct
   void init() {
     System.setProperty("es.set.netty.runtime.available.processors", "false");
@@ -77,7 +71,8 @@ public class FsmAutoConfiguration {
       ConfigurableEnvironment environment, MetricsService metricsService,
       TransactionRepositoryChannel repositoryChannel) {
     ActorSystem system = ActorSystem
-        .create("alpha-akka", akkaConfiguration(applicationContext, environment));
+        .create("alpha-cluster", akkaConfiguration(applicationContext, environment));
+
     SPRING_EXTENSION_PROVIDER.get(system).initialize(applicationContext);
     SAGA_DATA_EXTENSION_PROVIDER.get(system).setRepositoryChannel(repositoryChannel);
     SAGA_DATA_EXTENSION_PROVIDER.get(system).setMetricsService(metricsService);
@@ -97,40 +92,9 @@ public class FsmAutoConfiguration {
     return new MetricsService();
   }
 
-  @Bean
-  public ActorEventSink actorEventSink(MetricsService metricsService) {
-    return new SagaActorEventSender(metricsService);
-  }
-
-  @Bean
-  @ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "memory")
-  @ConditionalOnMissingBean(ActorEventChannel.class)
-  public ActorEventChannel memoryEventChannel(ActorEventSink actorEventSink,
-      MetricsService metricsService) {
-    return new MemoryActorEventChannel(actorEventSink, memoryEventChannelMemorySize,
-        metricsService);
-  }
-
-  @Bean
-  @ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "activemq")
-  @ConditionalOnMissingBean(ActorEventChannel.class)
-  public ActorEventChannel activeMqEventChannel(ActorEventSink actorEventSink,
-      MetricsService metricsService) {
-    return new ActiveMQActorEventChannel(actorEventSink, metricsService);
-  }
-
-  @Bean
-  @ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "kafka")
-  @ConditionalOnMissingBean(ActorEventChannel.class)
-  public ActorEventChannel kafkaEventChannel(ActorEventSink actorEventSink,
-      MetricsService metricsService, @Lazy KafkaMessagePublisher kafkaMessagePublisher){
-    return new KafkaActorEventChannel(actorEventSink, metricsService, kafkaMessagePublisher);
-  }
-
-  @Bean
-  @ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "redis")
-  public ActorEventChannel redisEventChannel(ActorEventSink actorEventSink, MetricsService metricsService, @Lazy RedisMessagePublisher redisMessagePublisher){
-    return new RedisActorEventChannel(actorEventSink, metricsService, redisMessagePublisher);
+  @Bean(name = "sagaShardRegionActor")
+  public ActorRef sagaShardRegionActor(ActorSystem actorSystem) {
+    return actorSystem.actorOf(Props.create(SagaShardRegionActor.class));
   }
 
   @Bean
@@ -148,12 +112,9 @@ public class FsmAutoConfiguration {
   }
 
   @Bean
-  @ConditionalOnMissingBean(TransactionRepositoryChannel.class)
-  @ConditionalOnProperty(value = "alpha.feature.akka.transaction.repository.channel.type", havingValue = "memory", matchIfMissing = true)
   TransactionRepositoryChannel memoryTransactionRepositoryChannel(TransactionRepository repository,
       MetricsService metricsService) {
-    return new MemoryTransactionRepositoryChannel(repository, memoryTransactionRepositoryChannelSize,
-        metricsService);
+    return new DefaultTransactionRepositoryChannel(repository, metricsService);
   }
 
 }
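The diff above removes the per-channel `@ConditionalOnProperty` bean methods from `FsmAutoConfiguration`; the `alpha.feature.akka.channel.type` property now picks a channel inside the imported auto-configuration classes instead. A plain-Java sketch of the dispatch that those conditional annotations express declaratively (the map values are illustrative names, and the hard-coded fallback to memory is an assumption, not lifted from the source):

```java
import java.util.HashMap;
import java.util.Map;

public class ChannelSelectionSketch {

  // One candidate per supported value of alpha.feature.akka.channel.type.
  static final Map<String, String> CHANNELS = new HashMap<>();
  static {
    CHANNELS.put("memory", "MemoryActorEventChannel");
    CHANNELS.put("kafka", "KafkaActorEventChannel");
    CHANNELS.put("redis", "RedisActorEventChannel");
  }

  // Unknown or unset types fall back to the in-memory channel.
  static String select(String channelType) {
    if (channelType == null) {
      channelType = "memory";
    }
    return CHANNELS.getOrDefault(channelType, "MemoryActorEventChannel");
  }

  public static void main(String[] args) {
    System.out.println(select(System.getProperty("alpha.feature.akka.channel.type")));
  }
}
```

Centralizing the choice in `@ImportAutoConfiguration` keeps `FsmAutoConfiguration` free of channel-specific `@Lazy` publisher wiring: each channel module now carries its own conditions and only activates when its property value matches.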
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractActorEventChannel.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractActorEventChannel.java
index 40f1ee7..61bbde5 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractActorEventChannel.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractActorEventChannel.java
@@ -28,14 +28,11 @@ public abstract class AbstractActorEventChannel implements ActorEventChannel {
   private static final Logger logger = LoggerFactory.getLogger(AbstractActorEventChannel.class);
 
   protected final MetricsService metricsService;
-  protected final ActorEventSink actorEventSink;
 
   public abstract void sendTo(BaseEvent event);
 
   public AbstractActorEventChannel(
-      ActorEventSink actorEventSink,
       MetricsService metricsService) {
-    this.actorEventSink = actorEventSink;
     this.metricsService = metricsService;
   }
 
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractEventConsumer.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractEventConsumer.java
new file mode 100644
index 0000000..6869add
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/AbstractEventConsumer.java
@@ -0,0 +1,20 @@
+package org.apache.servicecomb.pack.alpha.fsm.channel;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
+
+public abstract class AbstractEventConsumer {
+
+  protected final MetricsService metricsService;
+  protected final ActorSystem actorSystem;
+  protected final ActorRef sagaShardRegionActor;
+
+  public AbstractEventConsumer(
+      ActorSystem actorSystem,
+      ActorRef sagaShardRegionActor, MetricsService metricsService) {
+    this.metricsService = metricsService;
+    this.actorSystem = actorSystem;
+    this.sagaShardRegionActor = sagaShardRegionActor;
+  }
+}
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/ActiveMQActorEventChannel.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/ActiveMQActorEventChannel.java
deleted file mode 100644
index 3fdc19b..0000000
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/ActiveMQActorEventChannel.java
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.servicecomb.pack.alpha.fsm.channel;
-
-import java.lang.invoke.MethodHandles;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Queue
- * */
-
-public class ActiveMQActorEventChannel extends AbstractActorEventChannel {
-  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  public ActiveMQActorEventChannel(
-      ActorEventSink actorEventSink, MetricsService metricsService) {
-    super(actorEventSink, metricsService);
-  }
-
-  @Override
-  public void sendTo(BaseEvent event){
-    throw new UnsupportedOperationException("Doesn't implement yet!");
-  }
-}
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/KafkaActorEventChannel.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaActorEventChannel.java
similarity index 67%
rename from alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/KafkaActorEventChannel.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaActorEventChannel.java
index aca2676..6e3cfe7 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/KafkaActorEventChannel.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaActorEventChannel.java
@@ -15,24 +15,18 @@
  * limitations under the License.
  */
 
-package org.apache.servicecomb.pack.alpha.fsm.channel;
+package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
 
-import java.lang.invoke.MethodHandles;
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.fsm.channel.kafka.KafkaMessagePublisher;
+import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractActorEventChannel;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 public class KafkaActorEventChannel extends AbstractActorEventChannel {
-  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
   private KafkaMessagePublisher kafkaMessagePublisher;
 
-  public KafkaActorEventChannel(
-      ActorEventSink actorEventSink, MetricsService metricsService, KafkaMessagePublisher kafkaMessagePublisher) {
-    super(actorEventSink, metricsService);
+  public KafkaActorEventChannel(MetricsService metricsService, KafkaMessagePublisher kafkaMessagePublisher) {
+    super(metricsService);
     this.kafkaMessagePublisher = kafkaMessagePublisher;
   }
 
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
new file mode 100644
index 0000000..ad44641
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import com.google.common.collect.Maps;
+import java.lang.invoke.MethodHandles;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ExecutionException;
+import javax.annotation.PostConstruct;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.CreateTopicsResult;
+import org.apache.kafka.clients.admin.KafkaAdminClient;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.errors.TopicExistsException;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.servicecomb.pack.alpha.core.fsm.channel.ActorEventChannel;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Qualifier;
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
+import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.context.annotation.Lazy;
+import org.springframework.kafka.core.DefaultKafkaProducerFactory;
+import org.springframework.kafka.core.KafkaTemplate;
+import org.springframework.kafka.support.serializer.JsonSerializer;
+
+@Configuration
+@ConditionalOnClass(KafkaProperties.class)
+@ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "kafka")
+public class KafkaChannelAutoConfiguration {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  @Value("${alpha.feature.akka.channel.kafka.topic:servicecomb-pack-actor-event}")
+  private String topic;
+
+  @Value("${spring.kafka.bootstrap-servers}")
+  private String bootstrap_servers;
+
+  @Value("${spring.kafka.consumer.group-id:servicecomb-pack}")
+  private String groupId;
+
+  @Value("${spring.kafka.consumer.properties.spring.json.trusted.packages:org.apache.servicecomb.pack.alpha.core.fsm.event,org.apache.servicecomb.pack.alpha.core.fsm.event.base,org.apache.servicecomb.pack.alpha.core.fsm.event.internal}")
+  private String trusted_packages;
+
+  @Value("${spring.kafka.producer.batch-size:16384}")
+  private int batchSize;
+
+  @Value("${spring.kafka.producer.retries:0}")
+  private int retries;
+
+  @Value("${spring.kafka.producer.buffer.memory:33554432}")
+  private long bufferMemory;
+
+  @Value("${spring.kafka.consumer.auto.offset.reset:earliest}")
+  private String autoOffsetReset;
+
+  @Value("${spring.kafka.consumer.enable.auto.commit:false}")
+  private boolean enableAutoCommit;
+
+  @Value("${spring.kafka.consumer.auto.commit.interval.ms:100}")
+  private int autoCommitIntervalMs;
+
+  @Value("${spring.kafka.listener.ackMode:MANUAL_IMMEDIATE}")
+  private String ackMode;
+
+  @Value("${spring.kafka.listener.pollTimeout:1500}")
+  private long pollTimeout;
+
+  @Value("${kafka.numPartitions:6}")
+  private int numPartitions;
+
+  @Value("${kafka.replicationFactor:1}")
+  private short replicationFactor;
+
+  @PostConstruct
+  public void init() {
+    Map<String, Object> props = new HashMap<>();
+    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers);
+    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 50000);
+    try (final AdminClient adminClient = KafkaAdminClient.create(props)) {
+      try {
+        final NewTopic newTopic = new NewTopic(topic, numPartitions, replicationFactor);
+        final CreateTopicsResult createTopicsResult = adminClient
+            .createTopics(Collections.singleton(newTopic));
+        createTopicsResult.values().get(topic).get();
+      } catch (InterruptedException | ExecutionException e) {
+        if (!(e.getCause() instanceof TopicExistsException)) {
+          throw new RuntimeException(e.getMessage(), e);
+        }
+      }
+    }
+    LOG.info("Kafka Channel Init");
+  }
+
+  @Bean
+  @ConditionalOnMissingBean
+  public KafkaMessagePublisher kafkaMessagePublisher() {
+    Map<String, Object> map = Maps.newHashMap();
+    map.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers);
+    map.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
+    map.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
+    map.put(ProducerConfig.RETRIES_CONFIG, retries);
+    map.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
+    map.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
+    return new KafkaMessagePublisher(topic,
+        new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(map)));
+  }
+
+  @Bean
+  @ConditionalOnMissingBean(ActorEventChannel.class)
+  public ActorEventChannel kafkaEventChannel(MetricsService metricsService,
+      @Lazy KafkaMessagePublisher kafkaMessagePublisher) {
+    return new KafkaActorEventChannel(metricsService, kafkaMessagePublisher);
+  }
+
+  @Bean
+  KafkaSagaEventConsumer sagaEventKafkaConsumer(ActorSystem actorSystem,
+      @Qualifier("sagaShardRegionActor") ActorRef sagaShardRegionActor,
+      MetricsService metricsService) {
+    return new KafkaSagaEventConsumer(actorSystem, sagaShardRegionActor, metricsService,
+        bootstrap_servers, topic);
+  }
+}
\ No newline at end of file
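
The `init()` method above treats an already-existing topic as success by inspecting the cause of the `ExecutionException` thrown from the `CreateTopicsResult` future. The cause-unwrapping pattern can be sketched with plain JDK types; the nested `TopicExistsException` below is a local stand-in for the Kafka class, not the real one:

```java
import java.util.concurrent.ExecutionException;

public class CauseUnwrapDemo {
  // Local stand-in for org.apache.kafka.common.errors.TopicExistsException.
  static class TopicExistsException extends RuntimeException {
    TopicExistsException(String msg) { super(msg); }
  }

  // Returns true when the failure only signals that the topic already exists,
  // mirroring the tolerance check in KafkaChannelAutoConfiguration.init().
  static boolean isBenign(Exception e) {
    return e.getCause() instanceof TopicExistsException;
  }

  public static void main(String[] args) {
    Exception exists = new ExecutionException(new TopicExistsException("topic exists"));
    Exception other = new ExecutionException(new IllegalStateException("broker down"));
    System.out.println(isBenign(exists)); // true: safe to continue startup
    System.out.println(isBenign(other));  // false: rethrown as RuntimeException
  }
}
```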
diff --git a/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
similarity index 82%
rename from alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
index ba96b56..95de39b 100644
--- a/alpha/alpha-fsm-channel-kafka/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
@@ -17,15 +17,14 @@
 
 package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
 
+import java.util.concurrent.ExecutionException;
 import org.apache.servicecomb.pack.alpha.core.fsm.channel.MessagePublisher;
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.kafka.core.KafkaTemplate;
 
-import java.util.concurrent.ExecutionException;
-
-public class KafkaMessagePublisher implements MessagePublisher {
+public class KafkaMessagePublisher implements MessagePublisher<BaseEvent> {
 
     private static final Logger logger = LoggerFactory.getLogger(KafkaMessagePublisher.class);
 
@@ -38,18 +37,13 @@ public class KafkaMessagePublisher implements MessagePublisher {
     }
 
     @Override
-    public void publish(Object data) {
+    public void publish(BaseEvent data) {
         if(logger.isDebugEnabled()){
             logger.debug("send message [{}] to [{}]", data, topic);
         }
 
         try {
-            if(data instanceof BaseEvent) {
-                BaseEvent event = (BaseEvent) data;
-                kafkaTemplate.send(topic, event.getGlobalTxId(), event).get();
-            }else{
-                throw new UnsupportedOperationException("data must be BaseEvent type");
-            }
+            kafkaTemplate.send(topic, data.getGlobalTxId(), data).get();
         } catch (InterruptedException | ExecutionException | UnsupportedOperationException e) {
             logger.error("publish Exception = [{}]", e.getMessage(), e);
             throw new RuntimeException(e);
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
new file mode 100644
index 0000000..b816302
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import akka.kafka.CommitterSettings;
+import akka.kafka.ConsumerSettings;
+import akka.kafka.Subscriptions;
+import akka.kafka.javadsl.Committer;
+import akka.kafka.javadsl.Consumer;
+import akka.stream.ActorMaterializer;
+import akka.stream.Materializer;
+import akka.stream.javadsl.Keep;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.typesafe.config.Config;
+import java.lang.invoke.MethodHandles;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.CompletionStage;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
+import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractEventConsumer;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class KafkaSagaEventConsumer extends AbstractEventConsumer {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  final String groupId = "servicecomb-pack";
+  final ObjectMapper jsonMapper = new ObjectMapper();
+
+  public KafkaSagaEventConsumer(ActorSystem actorSystem, ActorRef sagaShardRegionActor,
+      MetricsService metricsService, String bootstrap_servers, String topic) {
+    super(actorSystem, sagaShardRegionActor, metricsService);
+
+    // init consumer
+    final Materializer materializer = ActorMaterializer.create(actorSystem);
+    final Config consumerConfig = actorSystem.settings().config().getConfig("akka.kafka.consumer");
+    final ConsumerSettings<String, String> consumerSettings =
+        ConsumerSettings
+            .create(consumerConfig, new StringDeserializer(), new StringDeserializer())
+            .withBootstrapServers(bootstrap_servers)
+            .withGroupId(groupId)
+            .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
+            .withProperty(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000")
+            .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
+            .withProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName())
+            .withProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getName());
+    CommitterSettings committerSettings = CommitterSettings.create(consumerConfig);
+    Consumer.committableSource(consumerSettings, Subscriptions.topics(topic))
+        .mapAsync(1, event -> { // parallelism must stay 1 to preserve per-partition ordering
+          return sendSagaActor(event.record().key(), event.record().value())
+              .thenApply(done -> event.committableOffset());
+        })
+        .toMat(Committer.sink(committerSettings), Keep.both())
+        .mapMaterializedValue(Consumer::createDrainingControl)
+        .run(materializer);
+  }
+
+  private CompletionStage<String> sendSagaActor(String key, String value) {
+    try {
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("key {}, value {}", key, value);
+      }
+      long begin = System.currentTimeMillis();
+      metricsService.metrics().doActorReceived();
+      sagaShardRegionActor.tell(jsonMapper.readValue(value, BaseEvent.class), sagaShardRegionActor);
+      long end = System.currentTimeMillis();
+      metricsService.metrics().doActorAccepted();
+      metricsService.metrics().doActorAvgTime(end - begin);
+      return CompletableFuture.completedFuture("");
+    } catch (Exception ex) {
+      metricsService.metrics().doActorRejected();
+      LOG.error("key {}, value {}", key, value, ex);
+      throw new CompletionException(ex);
+    }
+  }
+}
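
`sendSagaActor()` above signals failure by throwing a `CompletionException`, which `mapAsync` observes as a failed stage, so the corresponding offset is never committed and the record is redelivered (the at-least-once delivery this commit series adds). A minimal JDK-only sketch of that success/failure contract:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.CompletionStage;

public class DeliveryDemo {
  // Mimics sendSagaActor: complete normally on success, throw
  // CompletionException on failure so the caller sees a failed stage.
  static CompletionStage<String> send(String value) {
    if (value == null) {
      throw new CompletionException(new IllegalArgumentException("empty event"));
    }
    return CompletableFuture.completedFuture("");
  }

  public static void main(String[] args) {
    // Success path: the stage completes and the offset would be committed.
    send("event").toCompletableFuture().join();

    // Failure path: the exception surfaces before any commit, so the
    // record stays uncommitted and is redelivered after restart.
    boolean failed = false;
    try {
      send(null);
    } catch (CompletionException e) {
      failed = e.getCause() instanceof IllegalArgumentException;
    }
    System.out.println(failed); // true
  }
}
```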
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/MemoryActorEventChannel.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryActorEventChannel.java
similarity index 70%
rename from alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/MemoryActorEventChannel.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryActorEventChannel.java
index b56fe38..d5222b5 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/MemoryActorEventChannel.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryActorEventChannel.java
@@ -15,12 +15,13 @@
  * limitations under the License.
  */
 
-package org.apache.servicecomb.pack.alpha.fsm.channel;
+package org.apache.servicecomb.pack.alpha.fsm.channel.memory;
 
 import java.lang.invoke.MethodHandles;
 import java.util.concurrent.LinkedBlockingQueue;
 
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
+import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractActorEventChannel;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
 import org.slf4j.Logger;
@@ -32,12 +33,14 @@ public class MemoryActorEventChannel extends AbstractActorEventChannel {
   private final LinkedBlockingQueue<BaseEvent> eventQueue;
   private int size;
 
-  public MemoryActorEventChannel(ActorEventSink actorEventSink, int size,
-      MetricsService metricsService) {
-    super(actorEventSink, metricsService);
+  public MemoryActorEventChannel(MetricsService metricsService, int size) {
+    super(metricsService);
     this.size = size > 0 ? size : Integer.MAX_VALUE;
     eventQueue = new LinkedBlockingQueue(this.size);
-    new Thread(new EventConsumer(), "MemoryActorEventChannel").start();
+  }
+
+  public LinkedBlockingQueue<BaseEvent> getEventQueue() {
+    return eventQueue;
   }
 
   @Override
@@ -48,24 +51,4 @@ public class MemoryActorEventChannel extends AbstractActorEventChannel {
       throw new RuntimeException(e);
     }
   }
-
-  class EventConsumer implements Runnable {
-
-    @Override
-    public void run() {
-      while (true) {
-        try {
-          BaseEvent event = eventQueue.peek();
-          if (event != null) {
-            actorEventSink.send(event);
-            eventQueue.poll();
-          } else {
-            Thread.sleep(10);
-          }
-        } catch (Exception ex) {
-          LOG.error(ex.getMessage(), ex);
-        }
-      }
-    }
-  }
 }
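
With the internal `EventConsumer` thread removed, `MemoryActorEventChannel` is reduced to a bounded `LinkedBlockingQueue` that a separate consumer (created elsewhere in this refactoring) drains via `getEventQueue()`. The hand-off semantics can be sketched with the JDK alone:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandOffDemo {
  public static void main(String[] args) throws InterruptedException {
    // Bounded queue: producers block in put() once 'size' events are pending,
    // which is the back-pressure behaviour sendTo() relies on.
    LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(2);

    queue.put("event-1");
    queue.put("event-2");
    // A third put() would block here until a consumer takes an element.

    // Consumer side: take() blocks until an event is available, replacing
    // the old peek()/poll()/sleep(10) busy-wait loop that was deleted above.
    String first = queue.take();
    System.out.println(first);        // event-1 (FIFO order preserved)
    System.out.println(queue.size()); // 1
  }
}
```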
diff --git a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java
similarity index 84%
rename from alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java
index dc2ef16..5665eee 100644
--- a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/MessageSerializer.java
@@ -16,20 +16,20 @@
  */
 package org.apache.servicecomb.pack.alpha.fsm.channel.redis;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.data.redis.serializer.RedisSerializer;
-import org.springframework.data.redis.serializer.SerializationException;
-
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
+import java.lang.invoke.MethodHandles;
 import java.util.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.data.redis.serializer.RedisSerializer;
+import org.springframework.data.redis.serializer.SerializationException;
 
 public class MessageSerializer {
 
-  private static final Logger logger = LoggerFactory.getLogger(MessageSerializer.class);
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
   private static MessageSerializerImpl serializer = null;
 
@@ -37,15 +37,16 @@ public class MessageSerializer {
     serializer = new MessageSerializerImpl();
   }
 
-  public Optional<byte[]> serializer(Object data){
+  public Optional<byte[]> serializer(Object data) {
     return Optional.ofNullable(serializer.serialize(data));
   }
 
-  public Optional<Object> deserialize(byte[] bytes){
+  public Optional<Object> deserialize(byte[] bytes) {
     return Optional.ofNullable(serializer.deserialize(bytes));
   }
 
-  private class MessageSerializerImpl implements RedisSerializer<Object>{
+  private class MessageSerializerImpl implements RedisSerializer<Object> {
+
     @Override
     public byte[] serialize(Object data) throws SerializationException {
       try {
@@ -58,8 +59,8 @@ public class MessageSerializer {
         outputStream.close();
 
         return bytes;
-      }catch (Exception e){
-        logger.error("serialize Exception = [{}]", e.getMessage(), e);
+      } catch (Exception e) {
+        LOG.error("serialize Exception = [{}]", e.getMessage(), e);
       }
 
       return null;
@@ -76,8 +77,8 @@ public class MessageSerializer {
         objectInputStream.close();
 
         return object;
-      }catch (Exception e){
-        logger.error("deserialize Exception = [{}]", e.getMessage(), e);
+      } catch (Exception e) {
+        LOG.error("deserialize Exception = [{}]", e.getMessage(), e);
       }
 
       return null;
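
`MessageSerializer` above wraps plain JDK object serialization. The round trip it performs can be sketched self-contained; a `String` payload stands in for the `BaseEvent` subtypes Pack actually serializes, which must implement `Serializable`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class RoundTripDemo {
  // Serialize an object to bytes, as MessageSerializerImpl.serialize does.
  static byte[] toBytes(Object data) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
      out.writeObject(data);
    }
    return bos.toByteArray();
  }

  // Deserialize bytes back to an object, as deserialize does.
  static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
      return in.readObject();
    }
  }

  public static void main(String[] args) throws Exception {
    byte[] bytes = toBytes("saga-event");
    System.out.println(fromBytes(bytes)); // saga-event
  }
}
```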
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/RedisActorEventChannel.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisActorEventChannel.java
similarity index 75%
rename from alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/RedisActorEventChannel.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisActorEventChannel.java
index f68d7c0..20abdec 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/RedisActorEventChannel.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisActorEventChannel.java
@@ -15,35 +15,35 @@
  * limitations under the License.
  */
 
-package org.apache.servicecomb.pack.alpha.fsm.channel;
+package org.apache.servicecomb.pack.alpha.fsm.channel.redis;
 
 import java.lang.invoke.MethodHandles;
 
-import org.apache.servicecomb.pack.alpha.fsm.channel.redis.RedisMessagePublisher;
+import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractActorEventChannel;
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 /**
  * Pub/Sub
- * */
+ */
 
 public class RedisActorEventChannel extends AbstractActorEventChannel {
+
   private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
   private RedisMessagePublisher redisMessagePublisher;
 
-  public RedisActorEventChannel(
-      ActorEventSink actorEventSink, MetricsService metricsService, RedisMessagePublisher redisMessagePublisher) {
-    super(actorEventSink, metricsService);
+  public RedisActorEventChannel(MetricsService metricsService,
+      RedisMessagePublisher redisMessagePublisher) {
+    super(metricsService);
     this.redisMessagePublisher = redisMessagePublisher;
   }
 
   @Override
-  public void sendTo(BaseEvent event){
-    if(LOG.isDebugEnabled()) {
+  public void sendTo(BaseEvent event) {
+    if (LOG.isDebugEnabled()) {
       LOG.debug("sendTo message = [{}]", event);
     }
     redisMessagePublisher.publish(event);
diff --git a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java
similarity index 69%
rename from alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java
index e67e8d5..3bc8cb4 100644
--- a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisChannelAutoConfiguration.java
@@ -16,9 +16,15 @@
  */
 package org.apache.servicecomb.pack.alpha.fsm.channel.redis;
 
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import java.lang.invoke.MethodHandles;
+import javax.annotation.PostConstruct;
 import org.apache.servicecomb.pack.alpha.core.NodeStatus;
+import org.apache.servicecomb.pack.alpha.core.fsm.channel.ActorEventChannel;
 import org.apache.servicecomb.pack.alpha.core.fsm.channel.MessagePublisher;
 import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.beans.factory.annotation.Qualifier;
@@ -42,13 +48,20 @@ import org.springframework.data.redis.serializer.StringRedisSerializer;
 @ConditionalOnClass(RedisConnection.class)
 @ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "redis")
 public class RedisChannelAutoConfiguration {
-  private static final Logger logger = LoggerFactory.getLogger(RedisChannelAutoConfiguration.class);
+
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
   @Value("${alpha.feature.akka.channel.redis.topic:servicecomb-pack-actor-event}")
   private String topic;
 
+  @PostConstruct
+  public void init() {
+    LOG.info("Redis Channel Init");
+  }
+
   @Bean
-  public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
+  public RedisTemplate<String, Object> redisTemplate(
+      RedisConnectionFactory redisConnectionFactory) {
     RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
     redisTemplate.setKeySerializer(new StringRedisSerializer());
     redisTemplate.setHashKeySerializer(new GenericToStringSerializer<>(Object.class));
@@ -60,25 +73,29 @@ public class RedisChannelAutoConfiguration {
   }
 
   @Bean
-  RedisMessageSubscriber redisMessageSubscriber(@Lazy @Qualifier("actorEventSink") ActorEventSink actorEventSink,
+  RedisSagaEventConsumer redisSagaEventConsumer(ActorSystem actorSystem,
+      @Qualifier("sagaShardRegionActor") ActorRef sagaShardRegionActor,
+      MetricsService metricsService,
       @Lazy @Qualifier("nodeStatus") NodeStatus nodeStatus) {
-    return new RedisMessageSubscriber(actorEventSink, nodeStatus);
+    return new RedisSagaEventConsumer(actorSystem, sagaShardRegionActor, metricsService,
+        nodeStatus);
   }
 
   @Bean
-  public MessageListenerAdapter messageListenerAdapter(@Lazy @Qualifier("actorEventSink") ActorEventSink actorEventSink,
-      @Lazy @Qualifier("nodeStatus") NodeStatus nodeStatus) {
-    return new MessageListenerAdapter(redisMessageSubscriber(actorEventSink, nodeStatus));
+  public MessageListenerAdapter messageListenerAdapter(
+      RedisSagaEventConsumer redisSagaEventConsumer) {
+    return new MessageListenerAdapter(redisSagaEventConsumer);
   }
 
   @Bean
-  public RedisMessageListenerContainer redisMessageListenerContainer(RedisConnectionFactory redisConnectionFactory,
-      @Lazy @Qualifier("actorEvetSink") ActorEventSink actorEventSink,
-      @Lazy @Qualifier("nodeStatus") NodeStatus nodeStatus) {
+  public RedisMessageListenerContainer redisMessageListenerContainer(
+      RedisConnectionFactory redisConnectionFactory,
+      RedisSagaEventConsumer redisSagaEventConsumer) {
     RedisMessageListenerContainer redisMessageListenerContainer = new RedisMessageListenerContainer();
 
     redisMessageListenerContainer.setConnectionFactory(redisConnectionFactory);
-    redisMessageListenerContainer.addMessageListener(redisMessageSubscriber(actorEventSink, nodeStatus), channelTopic());
+    redisMessageListenerContainer
+        .addMessageListener(redisSagaEventConsumer, channelTopic());
 
     return redisMessageListenerContainer;
   }
@@ -90,10 +107,16 @@ public class RedisChannelAutoConfiguration {
 
   @Bean
   ChannelTopic channelTopic() {
-    if (logger.isDebugEnabled()) {
-      logger.debug("build channel topic = [{}]", topic);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("build channel topic = [{}]", topic);
     }
     return new ChannelTopic(topic);
   }
 
+  @Bean
+  public ActorEventChannel redisEventChannel(MetricsService metricsService,
+      @Lazy RedisMessagePublisher redisMessagePublisher) {
+    return new RedisActorEventChannel(metricsService, redisMessagePublisher);
+  }
+
 }
diff --git a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java
similarity index 75%
rename from alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java
index 31370e3..eca2af4 100644
--- a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessagePublisher.java
@@ -17,28 +17,31 @@
 package org.apache.servicecomb.pack.alpha.fsm.channel.redis;
 
 
+import java.lang.invoke.MethodHandles;
 import org.apache.servicecomb.pack.alpha.core.fsm.channel.MessagePublisher;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.data.redis.core.RedisTemplate;
 import org.springframework.data.redis.listener.ChannelTopic;
 
-public class RedisMessagePublisher implements MessagePublisher {
+public class RedisMessagePublisher implements MessagePublisher<BaseEvent> {
 
-  private static final Logger logger = LoggerFactory.getLogger(RedisMessagePublisher.class);
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
   private RedisTemplate<String, Object> redisTemplate;
   private ChannelTopic channelTopic;
 
-  public RedisMessagePublisher(RedisTemplate<String, Object> redisTemplate, ChannelTopic channelTopic) {
+  public RedisMessagePublisher(RedisTemplate<String, Object> redisTemplate,
+      ChannelTopic channelTopic) {
     this.redisTemplate = redisTemplate;
     this.channelTopic = channelTopic;
   }
 
   @Override
-  public void publish(Object data) {
-    if(logger.isDebugEnabled()) {
-      logger.debug("send message [{}] to [{}]", data, channelTopic.getTopic());
+  public void publish(BaseEvent data) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("send message [{}] to [{}]", data, channelTopic.getTopic());
     }
     redisTemplate.convertAndSend(channelTopic.getTopic(), data);
 
diff --git a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessageSubscriber.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisSagaEventConsumer.java
similarity index 54%
rename from alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessageSubscriber.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisSagaEventConsumer.java
index 0aa171b..f19e768 100644
--- a/alpha/alpha-fsm-channel-redis/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisMessageSubscriber.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/redis/RedisSagaEventConsumer.java
@@ -14,57 +14,55 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.servicecomb.pack.alpha.fsm.channel.redis;
 
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import java.lang.invoke.MethodHandles;
 import org.apache.servicecomb.pack.alpha.core.NodeStatus;
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
+import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractEventConsumer;
+import org.apache.servicecomb.pack.alpha.fsm.channel.memory.MemoryActorEventChannel;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.data.redis.connection.Message;
 import org.springframework.data.redis.connection.MessageListener;
 
-import java.nio.charset.StandardCharsets;
-
-public class RedisMessageSubscriber implements MessageListener {
+public class RedisSagaEventConsumer extends AbstractEventConsumer implements MessageListener {
 
-  private static final Logger logger = LoggerFactory.getLogger(RedisMessageSubscriber.class);
-
-  private ActorEventSink actorEventSink;
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   private NodeStatus nodeStatus;
-
   private MessageSerializer messageSerializer = new MessageSerializer();
 
-  public RedisMessageSubscriber(ActorEventSink actorEventSink, NodeStatus nodeStatus) {
-    this.actorEventSink = actorEventSink;
+  public RedisSagaEventConsumer(ActorSystem actorSystem, ActorRef sagaShardRegionActor,
+      MetricsService metricsService,
+      NodeStatus nodeStatus) {
+    super(actorSystem, sagaShardRegionActor, metricsService);
     this.nodeStatus = nodeStatus;
   }
 
   @Override
   public void onMessage(Message message, byte[] pattern) {
-    if(nodeStatus.isMaster()) {
-      if (logger.isDebugEnabled()) {
-        logger.debug("pattern = [{}]", new String(pattern, StandardCharsets.UTF_8));
-      }
-
+    if (nodeStatus.isMaster()) {
       messageSerializer.deserialize(message.getBody()).ifPresent(data -> {
-
         BaseEvent event = (BaseEvent) data;
-
-        if (logger.isDebugEnabled()) {
-          logger.debug("event = [{}]", event);
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("event = [{}]", event);
         }
-
         try {
-          actorEventSink.send(event);
+          long begin = System.currentTimeMillis();
+          metricsService.metrics().doActorReceived();
+          sagaShardRegionActor.tell(event, sagaShardRegionActor);
+          long end = System.currentTimeMillis();
+          metricsService.metrics().doActorAccepted();
+          metricsService.metrics().doActorAvgTime(end - begin);
         } catch (Exception e) {
-          logger.error("subscriber Exception = [{}]", e.getMessage(), e);
+          metricsService.metrics().doActorRejected();
+          LOG.error("subscriber Exception = [{}]", e.getMessage(), e);
         }
       });
-    }else{
-      if(logger.isDebugEnabled()){
-        logger.debug("nodeStatus is not master and cancel this time subscribe");
-      }
     }
   }
 }
diff --git a/alpha/pom.xml b/alpha/pom.xml
index 34c43da..6fa969d 100644
--- a/alpha/pom.xml
+++ b/alpha/pom.xml
@@ -33,8 +33,6 @@
   <modules>
     <module>alpha-core</module>
     <module>alpha-fsm</module>
-    <module>alpha-fsm-channel-redis</module>
-    <module>alpha-fsm-channel-kafka</module>
     <module>alpha-benchmark</module>
     <module>alpha-spring-cloud-starter-eureka</module>
     <module>alpha-spring-cloud-starter-consul</module>
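
For readers following the refactoring above: RedisSagaEventConsumer now wraps each delivery to the shard region actor in received/accepted/rejected metrics with timing. A minimal stand-alone sketch of that bookkeeping pattern — the Metrics class and the sink parameter here are simplified stand-ins, not the real MetricsService or ActorRef API:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

public class MeteredConsumer {
  static class Metrics {
    final AtomicLong received = new AtomicLong();
    final AtomicLong accepted = new AtomicLong();
    final AtomicLong rejected = new AtomicLong();
    volatile long lastSendMillis;
  }

  final Metrics metrics = new Metrics();

  // Mirrors onMessage: count the event as received, attempt delivery,
  // then record either acceptance (with elapsed time) or rejection.
  public <T> void deliver(T event, Consumer<T> sink) {
    try {
      long begin = System.currentTimeMillis();
      metrics.received.incrementAndGet();
      sink.accept(event); // stands in for sagaShardRegionActor.tell(event, ...)
      metrics.lastSendMillis = System.currentTimeMillis() - begin;
      metrics.accepted.incrementAndGet();
    } catch (Exception e) {
      metrics.rejected.incrementAndGet();
    }
  }

  public static void main(String[] args) {
    MeteredConsumer c = new MeteredConsumer();
    c.deliver("ok", e -> {});                                  // successful delivery
    c.deliver("boom", e -> { throw new RuntimeException(); }); // failed delivery
    System.out.println(c.metrics.received.get() + " "
        + c.metrics.accepted.get() + " " + c.metrics.rejected.get());
    // prints "2 1 1"
  }
}
```

The received/accepted split lets a dashboard expose in-flight pressure (received minus accepted) separately from hard failures (rejected), which is what the doActorReceived/doActorAccepted/doActorRejected counters provide.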


[servicecomb-pack] 31/42: SCB-1368 The default value of commit-time-warning is changed to 5s

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit b1d919dc7ed0eb64885d70a3a1c7ea3747bc2c51
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 16:01:24 2019 +0800

    SCB-1368 The default value of commit-time-warning is changed to 5s
---
 alpha/alpha-server/src/main/resources/application.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index 44c53af..05bfc20 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -176,7 +176,7 @@ akkaConfig:
         stop-timeout: 30s
         close-timeout: 20s
         commit-timeout: 15s
-        commit-time-warning: 1s
+        commit-time-warning: 5s
         commit-refresh-interval: infinite
         use-dispatcher: "akka.kafka.saga-kafka"
         kafka-clients.enable.auto.commit: false


[servicecomb-pack] 26/42: SCB-1368 Optimize log information


commit 8ca383f1c48078b79e76e7dfe25d1ec90596534a
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 09:57:17 2019 +0800

    SCB-1368 Optimize log information
---
 .../pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java       | 3 +--
 .../elasticsearch/ElasticsearchTransactionRepository.java         | 8 +++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
index 52de8ef..fe41fb3 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaMessagePublisher.java
@@ -39,12 +39,11 @@ public class KafkaMessagePublisher implements MessagePublisher<BaseEvent> {
     @Override
     public void publish(BaseEvent data) {
         if(LOG.isDebugEnabled()){
-            LOG.debug("send to kafka {} {} to {}", data.getGlobalTxId(), data.getType(), topic);
+            LOG.debug("send [{}] {} {}", data.getGlobalTxId(), data.getType(), data.getLocalTxId());
         }
         try {
             kafkaTemplate.send(topic, data.getGlobalTxId(), data).get();
         } catch (InterruptedException | ExecutionException | UnsupportedOperationException e) {
-            LOG.error("publish Exception = [{}]", e.getMessage(), e);
             throw new RuntimeException(e);
         }
     }
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java
index 1a33ec9..47655e3 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java
@@ -243,9 +243,11 @@ public class ElasticsearchTransactionRepository implements TransactionRepository
     metricsService.metrics().doRepositoryAccepted(queries.size());
     long end = System.currentTimeMillis();
     metricsService.metrics().doRepositoryAvgTime((end - begin) / queries.size());
-    LOG.info("save queries={}, received={}, accepted={}", queries.size(),
-        metricsService.metrics().getRepositoryReceived(),
-        metricsService.metrics().getRepositoryAccepted());
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("save queries={}, received={}, accepted={}", queries.size(),
+          metricsService.metrics().getRepositoryReceived(),
+          metricsService.metrics().getRepositoryAccepted());
+    }
     queries.clear();
   }
 


[servicecomb-pack] 22/42: SCB-1368 Added debug info


commit 6e409c2d91f22d26e7db42d18bae3af8493ebcb0
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 01:13:38 2019 +0800

    SCB-1368 Added debug info
---
 .../pack/alpha/benchmark/SagaEventBenchmark.java       | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java b/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java
index 09caf10..322aa54 100644
--- a/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java
+++ b/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java
@@ -63,7 +63,8 @@ public class SagaEventBenchmark {
     CountDownLatch end = new CountDownLatch(concurrency);
     begin.countDown();
     for (int i = 0; i < concurrency; i++) {
-      Execute execute = new Execute(sender, requests / concurrency, begin, end);
+      String id_prefix = "";
+      Execute execute = new Execute(sender, id_prefix,requests / concurrency, begin, end);
       new Thread(execute).start();
     }
     try {
@@ -109,7 +110,8 @@ public class SagaEventBenchmark {
     // warm up
     if (warmUp > 0) {
       for (int i = 0; i < warmUp; i++) {
-        Execute execute = new Execute(sender, warmUpRequests, begin, end);
+        String id_prefix = "warmup-";
+        Execute execute = new Execute(sender, id_prefix, warmUpRequests, begin, end);
         new Thread(execute).start();
       }
       try {
@@ -131,15 +133,16 @@ public class SagaEventBenchmark {
   }
 
   private class Execute implements Runnable {
-
+    String id_prefix;
     SagaMessageSender sender;
     CountDownLatch begin;
     CountDownLatch end;
     int requests;
 
-    public Execute(SagaMessageSender sender, int requests, CountDownLatch begin,
+    public Execute(SagaMessageSender sender, String id_prefix, int requests, CountDownLatch begin,
         CountDownLatch end) {
       this.sender = sender;
+      this.id_prefix = id_prefix;
       this.requests = requests;
       this.begin = begin;
       this.end = end;
@@ -158,7 +161,12 @@ public class SagaEventBenchmark {
           final String localTxId_3 = UUID.randomUUID().toString();
           try {
             sagaSuccessfulEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream()
-                .forEach(event -> sender.send(event));
+                .forEach(event -> {
+                  if(LOG.isDebugEnabled()){
+                    LOG.debug(event.toString());
+                  }
+                  sender.send(event);
+                });
           } catch (Throwable e) {
             metrics.failedRequestsIncrement();
           } finally {
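
The benchmark above coordinates its worker threads with two CountDownLatches: every Execute awaits begin so all workers start at the same instant, and counts down end when it finishes its share of requests. A self-contained sketch of that harness, with sender.send(event) replaced by a plain counter:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchHarness {
  public static int run(int concurrency, int requests) throws InterruptedException {
    CountDownLatch begin = new CountDownLatch(1);
    CountDownLatch end = new CountDownLatch(concurrency);
    AtomicInteger done = new AtomicInteger();
    for (int i = 0; i < concurrency; i++) {
      int perThread = requests / concurrency;
      new Thread(() -> {
        try {
          begin.await();              // block until all workers are created
          for (int j = 0; j < perThread; j++) {
            done.incrementAndGet();   // stands in for sender.send(event)
          }
        } catch (InterruptedException ignored) {
        } finally {
          end.countDown();            // signal this worker is finished
        }
      }).start();
    }
    begin.countDown();                // release all workers at once
    end.await();                      // wait until every worker finished
    return done.get();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(run(4, 100));  // prints 100
  }
}
```

Starting all workers from a single latch gives a sharper concurrency spike than starting threads in a loop, which is why both the warm-up pass and the measured pass use the same begin/end pair.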


[servicecomb-pack] 18/42: SCB-1368 Added debug info


commit 4fd334a55fd85f0fcdfc23259fa902c1d0a9d4bc
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 00:56:34 2019 +0800

    SCB-1368 Added debug info
---
 .../pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java      | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java
index f8b50c7..90fc25a 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java
@@ -90,6 +90,9 @@ public class SagaDataExtension extends AbstractExtensionId<SagaDataExt> {
           .build();
       repositoryChannel.send(record);
       sagaDataMap.remove(globalTxId);
+      if(LOG.isDebugEnabled()){
+        LOG.debug("send repository channel {}", globalTxId);
+      }
     }
 
     // Only for Test


[servicecomb-pack] 01/42: SCB-1368 Add akka cluster dependency


commit a68d39850002a4063d937eaa09f5430979b313c4
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Wed Aug 14 17:06:45 2019 +0800

    SCB-1368 Add akka cluster dependency
---
 alpha/alpha-fsm/pom.xml | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/alpha/alpha-fsm/pom.xml b/alpha/alpha-fsm/pom.xml
index 3c143ca..b51bec9 100644
--- a/alpha/alpha-fsm/pom.xml
+++ b/alpha/alpha-fsm/pom.xml
@@ -48,6 +48,16 @@
         <artifactId>akka-persistence_2.12</artifactId>
         <version>${akka.version}</version>
       </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-cluster_2.12</artifactId>
+        <version>${akka.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-cluster-metrics_2.12</artifactId>
+        <version>${akka.version}</version>
+      </dependency>
     </dependencies>
   </dependencyManagement>
 
@@ -127,7 +137,27 @@
       <artifactId>akka-persistence-redis_2.12</artifactId>
       <version>${akka-persistence-redis.version}</version>
     </dependency>
+    <dependency>
+      <groupId>com.typesafe.akka</groupId>
+      <artifactId>akka-cluster_2.12</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.typesafe.akka</groupId>
+      <artifactId>akka-cluster-metrics_2.12</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.typesafe.akka</groupId>
+      <artifactId>akka-slf4j_2.12</artifactId>
+    </dependency>
 
+    <!--
+      jmx over http
+      http://0.0.0.0:8090/actuator/jolokia/read/akka:type=Cluster
+    -->
+    <dependency>
+      <groupId>org.jolokia</groupId>
+      <artifactId>jolokia-core</artifactId>
+    </dependency>
     <!-- For testing the artifacts scope are test-->
     <dependency>
       <groupId>org.springframework.boot</groupId>


[servicecomb-pack] 38/42: SCB-1368 Added the license header.


commit e37775a0f6df80aaf374034650ffd15c62b86d6f
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 28 19:12:37 2019 +0800

    SCB-1368 Added the license header.
---
 .../pack/alpha/core/metrics/MetricsBeanTest.java        | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java b/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java
index 232d4fa..a9a32ae 100644
--- a/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java
+++ b/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.servicecomb.pack.alpha.core.metrics;
 
 import static org.junit.Assert.assertEquals;


[servicecomb-pack] 13/42: SCB-1368 Add default configuration of Akka


commit 93fa5f24c6493c1554274de9425ce40c24a02608
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 19:12:41 2019 +0800

    SCB-1368 Add default configuration of Akka
---
 alpha/alpha-server/src/main/resources/application.yaml | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index ed6ce40..98a58ce 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -57,6 +57,9 @@ eureka:
 
 akkaConfig:
   akka:
+    loglevel: INFO
+    loggers: ["akka.event.slf4j.Slf4jLogger"]
+    logging-filter: akka.event.slf4j.Slf4jLoggingFilter
     log-dead-letters: off
     log-dead-letters-during-shutdown: off
     actor:
@@ -70,18 +73,26 @@ akkaConfig:
         plugin: akka.persistence.snapshot-store.local
         local.dir: target/example/snapshots
     remote:
+      watch-failure-detector:
+        acceptable-heartbeat-pause: 6s
       artery:
         enabled: on
         transport: tcp
+        advanced:
+          outbound-message-queue-size: 20000
         canonical:
           hostname: ${alpha.server.host}
           port: 8070
     cluster:
+      auto-down-unreachable-after: "off" # disable automatic downing
+      failure-detector:
+        heartbeat-interval: 3s
+        acceptable-heartbeat-pause: 6s
       seed-nodes: ["akka://alpha-cluster@127.0.0.1:8070"]
       sharding:
         state-store-mode: "persistence"
         remember-entities: true
-        shard-failure-backoff: "5 s"
+        shard-failure-backoff: 5s
 
 management:
   endpoints:
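
The failure-detector values added above (heartbeat-interval: 3s, acceptable-heartbeat-pause: 6s) mean a peer is only suspected after heartbeats have been missing for the whole acceptable pause, so one delayed beat does not mark a node unreachable. A toy illustration of that semantics in plain Java — a simplified timeout model, not Akka's phi-accrual detector:

```java
public class HeartbeatDetector {
  // A peer is considered available while the time since its last
  // heartbeat stays within the acceptable pause (6s in the config above).
  private final long acceptablePauseMillis;
  private volatile long lastHeartbeat;

  HeartbeatDetector(long acceptablePauseMillis, long nowMillis) {
    this.acceptablePauseMillis = acceptablePauseMillis;
    this.lastHeartbeat = nowMillis;
  }

  void heartbeat(long nowMillis) { lastHeartbeat = nowMillis; }

  boolean isAvailable(long nowMillis) {
    return nowMillis - lastHeartbeat <= acceptablePauseMillis;
  }

  public static void main(String[] args) {
    HeartbeatDetector d = new HeartbeatDetector(6000, 0);
    d.heartbeat(3000);                       // a beat on the 3s schedule
    System.out.println(d.isAvailable(6000)); // true: only 3s since last beat
    System.out.println(d.isAvailable(9500)); // false: 6.5s pause exceeds 6s
  }
}
```

With auto-down-unreachable-after set to "off", crossing the pause only marks the node unreachable; an operator (or a downing provider) must still decide to remove it from the cluster.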


[servicecomb-pack] 35/42: SCB-1368 Update test cases timeout for CI


commit 814d2bb660ef6e1d93736148c7fd31d22806040d
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 21:17:38 2019 +0800

    SCB-1368 Update test cases timeout for CI
---
 .../pack/alpha/fsm/SagaIntegrationTest.java        | 24 +++++++++++-----------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
index 508be9f..59cf828 100644
--- a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
+++ b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
@@ -82,7 +82,7 @@ public class SagaIntegrationTest {
     SagaEventSender.successfulEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system)
           .getLastSagaData();
       return sagaData != null && sagaData.isTerminated()
@@ -110,7 +110,7 @@ public class SagaIntegrationTest {
       memoryActorEventChannel.send(event);
     });
 
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
@@ -129,7 +129,7 @@ public class SagaIntegrationTest {
     SagaEventSender.middleTxAbortedEvents(globalTxId, localTxId_1, localTxId_2).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
@@ -150,7 +150,7 @@ public class SagaIntegrationTest {
     SagaEventSender.lastTxAbortedEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
@@ -172,7 +172,7 @@ public class SagaIntegrationTest {
     SagaEventSender.sagaAbortedEventBeforeTxComponsitedEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
@@ -194,7 +194,7 @@ public class SagaIntegrationTest {
     SagaEventSender.receivedRemainingEventAfterFirstTxAbortedEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
@@ -216,7 +216,7 @@ public class SagaIntegrationTest {
     SagaEventSender.sagaAbortedEventAfterAllTxEndedsEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
@@ -238,7 +238,7 @@ public class SagaIntegrationTest {
     SagaEventSender.omegaSendSagaTimeoutEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.SUSPENDED;
     });
@@ -261,7 +261,7 @@ public class SagaIntegrationTest {
     SagaEventSender.sagaActorTriggerTimeoutEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3, timeout).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(timeout + 2, SECONDS).until(() -> {
+    await().atMost(timeout + 10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.SUSPENDED;
     });
@@ -283,7 +283,7 @@ public class SagaIntegrationTest {
     SagaEventSender.successfulWithTxConcurrentEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMMITTED;
     });
@@ -305,7 +305,7 @@ public class SagaIntegrationTest {
     SagaEventSender.successfulWithTxConcurrentCrossEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMMITTED;
     });
@@ -327,7 +327,7 @@ public class SagaIntegrationTest {
     SagaEventSender.lastTxAbortedEventWithTxConcurrentEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
       memoryActorEventChannel.send(event);
     });
-    await().atMost(2, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
       return sagaData !=null && sagaData.isTerminated() && sagaData.getLastState()==SagaActorState.COMPENSATED;
     });
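
The change above only widens Awaitility's atMost budget from 2s to 10s. Because await() returns as soon as the condition holds, a larger timeout does not slow down passing tests; it just gives slow CI machines more polling attempts before failing. A plain-Java sketch of the same poll-until-deadline pattern (a stand-in for illustration, not Awaitility's API):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class Await {
  // Poll `condition` every pollMillis until it is true or timeoutMillis
  // elapses; the happy path exits on the first successful poll.
  public static void atMost(long timeoutMillis, long pollMillis, BooleanSupplier condition)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() >= deadline) {
        throw new TimeoutException("condition not met within " + timeoutMillis + " ms");
      }
      Thread.sleep(pollMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Condition becomes true after ~200ms, far under the 10s budget.
    atMost(10_000, 50, () -> System.currentTimeMillis() - start >= 200);
    System.out.println("condition met early, well under the 10s budget");
  }
}
```

This is why raising atMost for CI is cheap: tests that previously passed in under 2s still finish just as fast.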


[servicecomb-pack] 05/42: SCB-1368 Define generic interface


commit 55c4a3a6a2cac7a470c62a92588ab01aca96a305
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:09:49 2019 +0800

    SCB-1368 Define generic interface
---
 .../servicecomb/pack/alpha/core/fsm/channel/MessagePublisher.java     | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/channel/MessagePublisher.java b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/channel/MessagePublisher.java
index a03cfb3..67c741f 100644
--- a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/channel/MessagePublisher.java
+++ b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/channel/MessagePublisher.java
@@ -16,8 +16,8 @@
  */
 package org.apache.servicecomb.pack.alpha.core.fsm.channel;
 
-public interface MessagePublisher {
+public interface MessagePublisher<T> {
 
-    void publish(Object data);
+    void publish(T data);
 
 }
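
Generifying the interface moves the payload type check to compile time: a channel implementation now declares MessagePublisher&lt;BaseEvent&gt; instead of accepting any Object. A minimal sketch of a typed implementation — Event and InMemoryPublisher are hypothetical stand-ins for BaseEvent and the real Kafka/Redis publishers:

```java
import java.util.ArrayList;
import java.util.List;

public class InMemoryPublisherDemo {
  // The generified interface from the commit: publish(T) instead of
  // publish(Object), so each channel fixes its payload type statically.
  interface MessagePublisher<T> {
    void publish(T data);
  }

  // Hypothetical stand-in for a channel payload such as BaseEvent.
  static class Event {
    final String globalTxId;
    Event(String globalTxId) { this.globalTxId = globalTxId; }
  }

  // A minimal typed publisher; KafkaMessagePublisher implements
  // MessagePublisher<BaseEvent> the same way, delegating to kafkaTemplate.send.
  static class InMemoryPublisher implements MessagePublisher<Event> {
    final List<Event> sent = new ArrayList<>();
    @Override public void publish(Event data) { sent.add(data); }
  }

  public static void main(String[] args) {
    InMemoryPublisher p = new InMemoryPublisher();
    p.publish(new Event("tx-1"));
    // p.publish("raw string");  // no longer compiles: publish takes Event, not Object
    System.out.println(p.sent.size()); // prints 1
  }
}
```

The practical win is that a mis-wired channel (say, handing a raw String to a Kafka publisher) fails at build time rather than with a ClassCastException at runtime.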


[servicecomb-pack] 29/42: SCB-1368 Ensure message delivery reliability between Kafka and ClusterShardRegion in cluster mode


commit 72c2ed628e46a916a516803b6c0391ceb4fb4bbb
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 11:53:53 2019 +0800

    SCB-1368 Ensure message delivery reliability between Kafka and ClusterShardRegion in cluster mode
---
 .../servicecomb/pack/alpha/fsm/SagaActor.java      | 90 +++++++++-------------
 .../pack/alpha/fsm/SagaShardRegionActor.java       | 25 ++++--
 .../fsm/channel/kafka/KafkaSagaEventConsumer.java  | 21 +++--
 .../src/main/resources/application.yaml            | 17 ++--
 4 files changed, 81 insertions(+), 72 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
index 4bd536e..e64b82d 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaActor.java
@@ -75,7 +75,6 @@ public class SagaActor extends
     when(SagaActorState.IDLE,
         matchEvent(SagaStartedEvent.class,
             (event, data) -> {
-              log(event);
               sagaBeginTime = System.currentTimeMillis();
               SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(context().system()).doSagaBeginCounter();
               SagaStartedDomain domainEvent = new SagaStartedDomain(event);
@@ -96,7 +95,6 @@ public class SagaActor extends
     when(SagaActorState.READY,
         matchEvent(TxStartedEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return goTo(SagaActorState.PARTIALLY_ACTIVE)
@@ -109,14 +107,12 @@ public class SagaActor extends
             }
         ).event(SagaEndedEvent.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.UNPREDICTABLE);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
             }
         ).event(SagaAbortedEvent.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.UNPREDICTABLE);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
@@ -132,7 +128,6 @@ public class SagaActor extends
     when(SagaActorState.PARTIALLY_ACTIVE,
         matchEvent(TxEndedEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return goTo(SagaActorState.PARTIALLY_COMMITTED)
@@ -145,7 +140,6 @@ public class SagaActor extends
             }
         ).event(TxStartedEvent.class,
             (event, data) -> {
-              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return stay()
@@ -157,7 +151,6 @@ public class SagaActor extends
             }
         ).event(SagaTimeoutEvent.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED,
                   SuspendedType.TIMEOUT);
               return goTo(SagaActorState.SUSPENDED)
@@ -165,7 +158,6 @@ public class SagaActor extends
             }
         ).event(TxAbortedEvent.class,
             (event, data) -> {
-              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return goTo(SagaActorState.FAILED)
                   .applying(domainEvent);
@@ -180,7 +172,6 @@ public class SagaActor extends
     when(SagaActorState.PARTIALLY_COMMITTED,
         matchEvent(TxStartedEvent.class,
             (event, data) -> {
-              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return goTo(SagaActorState.PARTIALLY_ACTIVE)
@@ -193,7 +184,6 @@ public class SagaActor extends
             }
         ).event(TxEndedEvent.class,
             (event, data) -> {
-              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               if (data.getExpirationTime() != null) {
                 return stay()
@@ -205,27 +195,23 @@ public class SagaActor extends
             }
         ).event(SagaTimeoutEvent.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.TIMEOUT);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
             }
         ).event(SagaEndedEvent.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.COMMITTED);
               return goTo(SagaActorState.COMMITTED)
                   .applying(domainEvent);
             }
         ).event(SagaAbortedEvent.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.FAILED);
               return goTo(SagaActorState.FAILED).applying(domainEvent);
             }
         ).event(TxAbortedEvent.class,
             (event, data) -> {
-              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return goTo(SagaActorState.FAILED).applying(domainEvent);
             }
@@ -239,14 +225,12 @@ public class SagaActor extends
     when(SagaActorState.FAILED,
         matchEvent(SagaTimeoutEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.SUSPENDED, SuspendedType.TIMEOUT);
               return goTo(SagaActorState.SUSPENDED)
                   .applying(domainEvent);
             }
         ).event(TxCompensatedEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return stay().applying(domainEvent).andThen(exec(_data -> {
                 self().tell(ComponsitedCheckEvent.builder().build(), self());
@@ -254,7 +238,6 @@ public class SagaActor extends
             }
         ).event(ComponsitedCheckEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               if (hasCompensationSentTx(data) || !data.isTerminated()) {
                 return stay();
               } else {
@@ -266,7 +249,6 @@ public class SagaActor extends
             }
         ).event(SagaAbortedEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               data.setTerminated(true);
               if (hasCommittedTx(data)) {
                 SagaEndedDomain domainEvent = new SagaEndedDomain(event, SagaActorState.FAILED);
@@ -285,13 +267,11 @@ public class SagaActor extends
             }
         ).event(TxStartedEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               AddTxEventDomain domainEvent = new AddTxEventDomain(event);
               return stay().applying(domainEvent);
             }
         ).event(TxEndedEvent.class, SagaData.class,
             (event, data) -> {
-              log(event);
               UpdateTxEventDomain domainEvent = new UpdateTxEventDomain(event);
               return stay().applying(domainEvent).andThen(exec(_data -> {
                 TxEntity txEntity = _data.getTxEntityMap().get(event.getLocalTxId());
@@ -310,8 +290,7 @@ public class SagaActor extends
     when(SagaActorState.COMMITTED,
         matchEvent(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.class,
             (event, data) -> {
-              log(event);
-              beforeStop(stateName(), data);
+              beforeStop(event, stateName(), data);
               return stop();
             }
         )
@@ -320,8 +299,7 @@ public class SagaActor extends
     when(SagaActorState.SUSPENDED,
         matchEvent(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.class,
             (event, data) -> {
-              log(event);
-              beforeStop(stateName(), data);
+              beforeStop(event, stateName(), data);
               return stop();
             }
         )
@@ -330,8 +308,7 @@ public class SagaActor extends
     when(SagaActorState.COMPENSATED,
         matchEvent(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.class,
             (event, data) -> {
-              log(event);
-              beforeStop(stateName(), data);
+              beforeStop(event, stateName(), data);
               return stop();
             }
         )
@@ -339,7 +316,9 @@ public class SagaActor extends
 
     whenUnhandled(
         matchAnyEvent((event, data) -> {
-          LOG.error("Unhandled event {}", event);
+          if (event instanceof BaseEvent){
+            LOG.error("Unhandled event {}", event);
+          }
           return stay();
         })
     );
@@ -352,33 +331,29 @@ public class SagaActor extends
                 .putSagaData(stateData().getGlobalTxId(), stateData());
           }
           if (LOG.isDebugEnabled()) {
-            LOG.debug("transition {} {} -> {}", stateData().getGlobalTxId(), from, to);
+            LOG.debug("transition [{}] {} -> {}", stateData().getGlobalTxId(), from, to);
           }
           if (to == SagaActorState.COMMITTED ||
               to == SagaActorState.SUSPENDED ||
               to == SagaActorState.COMPENSATED) {
             self().tell(org.apache.servicecomb.pack.alpha.core.fsm.event.internal.StopEvent.builder().build(), self());
           }
-          LOG.info("transition {} {} -> {}", stateData().getGlobalTxId(), from, to);
         })
     );
 
     onTermination(
         matchStop(
             Normal(), (state, data) -> {
-              if (LOG.isDebugEnabled()) {
-                LOG.debug("saga actor stopped {} {}", getSelf(), state);
-              }
-              LOG.info("stopped {} {}", data.getGlobalTxId(), state);
+              LOG.info("stopped [{}] {}", data.getGlobalTxId(), state);
             }
         )
     );
 
   }
 
-  private void beforeStop(SagaActorState state, SagaData data){
+  private void beforeStop(BaseEvent event, SagaActorState state, SagaData data){
     if (LOG.isDebugEnabled()) {
-      LOG.debug("stop {} {}", data.getGlobalTxId(), state);
+      LOG.debug("stop [{}] {}", data.getGlobalTxId(), state);
     }
     try{
       sagaEndTime = System.currentTimeMillis();
@@ -394,11 +369,8 @@ public class SagaActor extends
       // destroy self from cluster shard region
       getContext().getParent()
           .tell(new ShardRegion.Passivate(PoisonPill.getInstance()), getSelf());
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("destroy saga actor {} from cluster shard region", getSelf());
-      }
 
-      // clear self mailbox from persistence
+      //  clear self mailbox from persistence
       //  已经停止的Actor使用以下两个命令清理,但是 highestSequenceNr 不会被删除,需要手工清理
       //  以下基于 journal-redis 说明:
       //    假设 globalTxId=ed2cdb9c-e86c-4b01-9f43-8e34704e7694, 那么在 Redis 中会生成三个 key
@@ -418,10 +390,30 @@ public class SagaActor extends
       //      并删除 journal:persisted:item:highestSequenceNr
       //
       //  目前可以看到的解释是 https://github.com/akka/akka/issues/21181
+      //
+      //  Lua script akka-persistence-redis-clean.lua
+
+      //  local ids = redis.call('smembers','journal:persistenceIds');
+      //  local delkeys = {};
+      //  for k, v in pairs(ids) do
+      //    local jpid = 'journal:persisted:' .. v;
+      //    local jpidnr = 'journal:persisted:' .. v .. ':highestSequenceNr';
+      //    local hasjpid  = redis.call('exists',jpid);
+      //    if(hasjpid == 0)
+      //    then
+      //      local hasjpidnr  = redis.call('exists',jpidnr);
+      //      if(hasjpidnr == 1)
+      //      then
+      //        redis.call('del',jpidnr);
+      //        table.insert(delkeys,jpid);
+      //      end
+      //    end
+      //  end
+      //  return delkeys;
       deleteMessages(lastSequenceNr());
       deleteSnapshot(snapshotSequenceNr());
     }catch(Exception e){
-      LOG.error("stop {} fail",data.getGlobalTxId());
+      LOG.error("stop [{}] fail",data.getGlobalTxId());
       throw e;
     }
   }
@@ -430,11 +422,10 @@ public class SagaActor extends
   public SagaData applyEvent(DomainEvent event, SagaData data) {
     try{
       if (this.recoveryRunning()) {
-        LOG.info("SagaActor recovery {}",event.getEvent());
+        LOG.info("recovery {}",event.getEvent());
       }else if (LOG.isDebugEnabled()) {
-        LOG.debug("SagaActor apply event {}", event.getEvent());
+        LOG.debug("persistence {}", event.getEvent());
       }
-      // log event to SagaData
       if (event.getEvent() != null && !(event
           .getEvent() instanceof ComponsitedCheckEvent)) {
         data.logEvent(event.getEvent());
@@ -508,8 +499,9 @@ public class SagaActor extends
         }
       }
     }catch (Exception ex){
-      LOG.error("SagaActor apply event {}", event.getEvent());
-      beforeStop(SagaActorState.SUSPENDED, data);
+      LOG.error("apply {}", event.getEvent(), ex);
+      LOG.error(ex.getMessage(), ex);
+      beforeStop(event.getEvent(), SagaActorState.SUSPENDED, data);
       stop();
      //TODO add a metric for SagaActor processing failures
     }
@@ -519,7 +511,7 @@ public class SagaActor extends
   @Override
   public void onRecoveryCompleted() {
     if(stateName() != SagaActorState.IDLE){
-      LOG.info("SagaActor {} recovery completed, state={}", stateData().getGlobalTxId(), stateName());
+      LOG.info("recovery completed [{}] state={}", stateData().getGlobalTxId(), stateName());
     }
   }
 
@@ -585,10 +577,4 @@ public class SagaActor extends
       }
     }
   }
-
-  private void log(BaseEvent event) {
-    if (LOG.isDebugEnabled()) {
-      LOG.debug(event.toString());
-    }
-  }
 }
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
index 6e39033..daa4ee4 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
@@ -24,11 +24,15 @@ import akka.actor.Props;
 import akka.cluster.sharding.ClusterSharding;
 import akka.cluster.sharding.ClusterShardingSettings;
 import akka.cluster.sharding.ShardRegion;
+import java.lang.invoke.MethodHandles;
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class SagaShardRegionActor extends AbstractActor {
 
-  private final ActorRef workerRegion;
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  private final ActorRef sagaActorRegion;
 
   static ShardRegion.MessageExtractor messageExtractor = new ShardRegion.MessageExtractor() {
     @Override
@@ -47,7 +51,7 @@ public class SagaShardRegionActor extends AbstractActor {
 
     @Override
     public String shardId(Object message) {
-      int numberOfShards = 100;
+      int numberOfShards = 10; // NOTE: Greater than the number of alpha nodes
       if (message instanceof BaseEvent) {
         String actorId = ((BaseEvent) message).getGlobalTxId();
         return String.valueOf(actorId.hashCode() % numberOfShards);
@@ -63,9 +67,9 @@ public class SagaShardRegionActor extends AbstractActor {
   public SagaShardRegionActor() {
     ActorSystem system = getContext().getSystem();
     ClusterShardingSettings settings = ClusterShardingSettings.create(system);
-    workerRegion = ClusterSharding.get(system)
+    sagaActorRegion = ClusterSharding.get(system)
         .start(
-            "saga-shard-region-actor",
+            SagaActor.class.getSimpleName(),
             Props.create(SagaActor.class),
             settings,
             messageExtractor);
@@ -74,8 +78,17 @@ public class SagaShardRegionActor extends AbstractActor {
   @Override
   public Receive createReceive() {
     return receiveBuilder()
-        .matchAny(msg -> {
-          workerRegion.tell(msg, getSelf());
+        .matchAny(event -> {
+          final BaseEvent evt = (BaseEvent) event;
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("=> [{}] {} {}", evt.getGlobalTxId(), evt.getType(), evt.getLocalTxId());
+          }
+
+          sagaActorRegion.tell(event, getSelf());
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("<= [{}] {} {}", evt.getGlobalTxId(), evt.getType(), evt.getLocalTxId());
+          }
+          getSender().tell("confirm", getSelf());
         })
         .build();
   }
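The extractor above computes the shard id as `globalTxId.hashCode() % numberOfShards`. Since `String.hashCode()` can be negative in Java, the plain remainder can be negative too, yielding shard ids like "-8" and silently doubling the shard-id space; `Math.floorMod` always lands in `[0, numberOfShards)`. A standalone sketch of the two variants (pure Java, no Akka; the constant mirrors the diff):

```java
public class ShardIdSketch {
  static final int NUMBER_OF_SHARDS = 10; // mirrors the value in the diff

  // Plain % as in the extractor: a negative hashCode gives a negative
  // shard id, so "-8" and "8" become two distinct shards.
  static int remainderShard(int hash) {
    return hash % NUMBER_OF_SHARDS;
  }

  // floorMod always maps into [0, NUMBER_OF_SHARDS).
  static int floorModShard(int hash) {
    return Math.floorMod(hash, NUMBER_OF_SHARDS);
  }

  public static void main(String[] args) {
    int hash = "some-global-tx-id".hashCode();
    System.out.println(remainderShard(-7));       // prints -7
    System.out.println(floorModShard(-7));        // prints 3
    System.out.println(floorModShard(hash) >= 0); // prints true
  }
}
```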
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
index 7790c12..8ee2d40 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaSagaEventConsumer.java
@@ -23,9 +23,11 @@ import akka.kafka.ConsumerMessage;
 import akka.kafka.ConsumerSettings;
 import akka.kafka.Subscriptions;
 import akka.kafka.javadsl.Consumer;
+import akka.pattern.Patterns;
 import akka.stream.ActorMaterializer;
 import akka.stream.Materializer;
 import akka.stream.javadsl.Sink;
+import akka.util.Timeout;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.typesafe.config.Config;
 import java.lang.invoke.MethodHandles;
@@ -39,6 +41,9 @@ import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractEventConsumer;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+import scala.concurrent.duration.Duration;
 
 public class KafkaSagaEventConsumer extends AbstractEventConsumer {
 
@@ -64,10 +69,10 @@ public class KafkaSagaEventConsumer extends AbstractEventConsumer {
             .withProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "StringDeserializer.class")
             .withProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "StringDeserializer.class");
     Consumer.committableSource(consumerSettings, Subscriptions.topics(topic))
-        .mapAsync(10, event -> {
+        .mapAsync(20, event -> {
           BaseEvent bean = jsonMapper.readValue(event.record().value(), BaseEvent.class);
           if (LOG.isDebugEnabled()) {
-            LOG.debug("kafka receive {} {}", bean.getGlobalTxId(), bean.getType());
+            LOG.debug("receive [{}] {} {}", bean.getGlobalTxId(), bean.getType(), bean.getLocalTxId());
           }
           return sendSagaActor(bean).thenApply(done -> event.committableOffset());
         })
@@ -76,7 +81,7 @@ public class KafkaSagaEventConsumer extends AbstractEventConsumer {
             ConsumerMessage::createCommittableOffsetBatch,
             ConsumerMessage.CommittableOffsetBatch::updated
         )
-        .mapAsync(10, offset -> offset.commitJavadsl())
+        .mapAsync(20, offset -> offset.commitJavadsl())
         .to(Sink.ignore())
         .run(materializer);
   }
@@ -85,14 +90,14 @@ public class KafkaSagaEventConsumer extends AbstractEventConsumer {
     try {
       long begin = System.currentTimeMillis();
       metricsService.metrics().doActorReceived();
-      sagaShardRegionActor.tell(event, sagaShardRegionActor);
+      // Use the synchronous method call to ensure that Kafka's Offset is set after the delivery is successful.
+      Timeout timeout = new Timeout(Duration.create(10, "seconds"));
+      Future<Object> future = Patterns.ask(sagaShardRegionActor, event, timeout);
+      Await.result(future, timeout.duration());
       long end = System.currentTimeMillis();
       metricsService.metrics().doActorAccepted();
       metricsService.metrics().doActorAvgTime(end - begin);
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("send saga actor {} {}", event, event.getType());
-      }
-      return CompletableFuture.completedFuture("");
+      return CompletableFuture.completedFuture("OK");
     } catch (Exception ex) {
       LOG.error(ex.getMessage(),ex);
       metricsService.metrics().doActorRejected();
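The change above replaces fire-and-forget `tell` with `Patterns.ask` plus `Await.result`, so the Kafka offset is committed only after the shard region sends back its "confirm" reply. The commit-after-confirm ordering can be sketched without Akka or Kafka, using a `CompletableFuture` as the asynchronous reply (all names here are illustrative, not part of the project):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AskAwaitSketch {
  // Offsets are recorded only after delivery is confirmed, mirroring
  // the commit-after-Await ordering in KafkaSagaEventConsumer.
  static final List<Long> committedOffsets = new ArrayList<>();

  // Stand-in for Patterns.ask: the "actor" replies asynchronously.
  static CompletableFuture<String> ask(String event) {
    return CompletableFuture.supplyAsync(() -> "confirm");
  }

  static void consume(String event, long offset) throws Exception {
    // Block until the reply arrives or 10 seconds pass, as the
    // Timeout in the diff does; only then commit the offset.
    String reply = ask(event).get(10, TimeUnit.SECONDS);
    if ("confirm".equals(reply)) {
      committedOffsets.add(offset);
    }
  }

  public static void main(String[] args) throws Exception {
    consume("TxStartedEvent", 42L);
    System.out.println(committedOffsets); // prints [42]
  }
}
```

If the reply never comes, `get` throws a `TimeoutException` and the offset is never committed, so the record is redelivered: at-least-once rather than at-most-once.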
diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index 664f692..44c53af 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -88,11 +88,11 @@ akkaConfig:
       failure-detector:
         heartbeat-interval: 3s
         acceptable-heartbeat-pause: 6s
-      seed-nodes: ["akka://alpha-cluster@127.0.0.1:8070"]
-      sharding:
-        state-store-mode: "ddata" #ddata,persistence
-        remember-entities: true
-        shard-failure-backoff: 5s
+      seed-nodes: ["akka://alpha-cluster@0.0.0.0:8070"]
+    sharding:
+      state-store-mode: ddata
+      remember-entities: "on"
+      shard-failure-backoff: 5s
 
 management:
   endpoints:
@@ -160,11 +160,16 @@ akkaConfig:
   akka:
     actor:
       provider: cluster
-    persistence: # redis persistence
+    persistence:
+      at-least-once-delivery:
+        redeliver-interval: 10s
+        redelivery-burst-limit: 2000
       journal:
         plugin: akka-persistence-redis.journal
       snapshot-store:
         plugin: akka-persistence-redis.snapshot
+    sharding:
+      state-store-mode: persistence
     kafka:
       consumer:
         poll-interval: 50ms


[servicecomb-pack] 10/42: SCB-1368 Indicate sub types of serializable polymorphic types

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit b5c2d1c1bf495fb6ca5be34522d15c7b8514eed0
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:30:17 2019 +0800

    SCB-1368 Indicate sub types of serializable polymorphic types
---
 .../pack/alpha/core/fsm/event/base/BaseEvent.java  | 30 +++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
index d0d8859..0f13c6d 100644
--- a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
+++ b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
@@ -17,11 +17,35 @@
 
 package org.apache.servicecomb.pack.alpha.core.fsm.event.base;
 
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.core.JsonProcessingException;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import java.io.Serializable;
 import java.util.Date;
 import java.util.Map;
-
+import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaAbortedEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaEndedEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaStartedEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaTimeoutEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.TxAbortedEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.TxCompensatedEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.TxEndedEvent;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.TxStartedEvent;
+
+@JsonIgnoreProperties(ignoreUnknown = true)
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY)
+@JsonSubTypes({
+    @JsonSubTypes.Type(value = SagaStartedEvent.class, name = "SagaStartedEvent"),
+    @JsonSubTypes.Type(value = SagaEndedEvent.class, name = "SagaEndedEvent"),
+    @JsonSubTypes.Type(value = SagaAbortedEvent.class, name = "SagaAbortedEvent"),
+    @JsonSubTypes.Type(value = SagaTimeoutEvent.class, name = "SagaTimeoutEvent"),
+    @JsonSubTypes.Type(value = TxStartedEvent.class, name = "TxStartedEvent"),
+    @JsonSubTypes.Type(value = TxEndedEvent.class, name = "TxEndedEvent"),
+    @JsonSubTypes.Type(value = TxAbortedEvent.class, name = "TxAbortedEvent"),
+    @JsonSubTypes.Type(value = TxCompensatedEvent.class, name = "TxCompensatedEvent")
+})
 public abstract class BaseEvent implements Serializable {
   private final ObjectMapper mapper = new ObjectMapper();
   private String serviceName;
@@ -89,6 +113,9 @@ public abstract class BaseEvent implements Serializable {
 
   @Override
   public String toString() {
+    try {
+      return mapper.writeValueAsString(this);
+    } catch (JsonProcessingException e) {
     return this.getClass().getSimpleName()+"{" +
         "serviceName='" + serviceName + '\'' +
         ", instanceId='" + instanceId + '\'' +
@@ -97,6 +124,7 @@ public abstract class BaseEvent implements Serializable {
         ", localTxId='" + localTxId + '\'' +
         ", createTime=" + createTime +
         '}';
+    }
   }
 
   public Map<String,Object> toMap() throws Exception {
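`@JsonTypeInfo(use = Id.NAME)` embeds a type-name property in the serialized JSON, and `@JsonSubTypes` tells Jackson which concrete class each name maps to, so a `BaseEvent`-typed value can be deserialized polymorphically. The mechanism can be sketched without Jackson as a name-to-factory registry (pure Java; the event classes are reduced to stubs):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TypeTagSketch {
  abstract static class BaseEvent { String globalTxId; }
  static class TxStartedEvent extends BaseEvent { }
  static class TxEndedEvent extends BaseEvent { }

  // What @JsonSubTypes gives Jackson: a logical-name -> subtype map.
  static final Map<String, Supplier<BaseEvent>> SUBTYPES = new HashMap<>();
  static {
    SUBTYPES.put("TxStartedEvent", TxStartedEvent::new);
    SUBTYPES.put("TxEndedEvent", TxEndedEvent::new);
  }

  // What @JsonTypeInfo(use = Id.NAME) does on read: dispatch on the
  // embedded type-name property instead of a declared Java class.
  static BaseEvent fromTypeName(String typeName) {
    Supplier<BaseEvent> factory = SUBTYPES.get(typeName);
    if (factory == null) throw new IllegalArgumentException(typeName);
    return factory.get();
  }

  public static void main(String[] args) {
    BaseEvent e = fromTypeName("TxEndedEvent");
    System.out.println(e.getClass().getSimpleName()); // prints TxEndedEvent
  }
}
```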


[servicecomb-pack] 09/42: SCB-1368 Add default configuration of Akka cluster


commit 1bd3518f3c54d3278c318aae1e48e9dbcfe5d55d
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:28:46 2019 +0800

    SCB-1368 Add default configuration of Akka cluster
---
 .../src/main/resources/application.yaml            | 103 ++++++++++++++++-----
 1 file changed, 78 insertions(+), 25 deletions(-)

diff --git a/alpha/alpha-server/src/main/resources/application.yaml b/alpha/alpha-server/src/main/resources/application.yaml
index 23ae3e2..ed6ce40 100644
--- a/alpha/alpha-server/src/main/resources/application.yaml
+++ b/alpha/alpha-server/src/main/resources/application.yaml
@@ -16,16 +16,20 @@
 ## ---------------------------------------------------------------------------
 server:
   port: 8090
+  host: 0.0.0.0
 
 alpha:
   server:
-    host: 0.0.0.0
+    host: ${server.host}
     port: 8080
   feature:
     akka:
       enabled: false
       channel:
         type: memory
+      transaction:
+        repository:
+          type: elasticsearch
 
 spring:
   datasource:
@@ -52,20 +56,32 @@ eureka:
 
 
 akkaConfig:
-  # persistence
-  akka.persistence.journal.plugin: akka.persistence.journal.inmem
-  akka.persistence.journal.leveldb.dir: target/example/journal
-  akka.persistence.snapshot-store.plugin: akka.persistence.snapshot-store.local
-  akka.persistence.snapshot-store.local.dir: target/example/snapshots
-  # cluster
-  akka.actor.provider: cluster
-  akka.remote.log-remote-lifecycle-events: info
-  akka.remote.netty.tcp.hostname: 127.0.0.1
-  akka.remote.netty.tcp.port: 8070
-  akka.cluster.seed-nodes: ["akka.tcp://alpha-akka@127.0.0.1:8070"]
-  #
-  akka.extensions: ["akka.cluster.metrics.ClusterMetricsExtension"]
-
+  akka:
+    log-dead-letters: off
+    log-dead-letters-during-shutdown: off
+    actor:
+      warn-about-java-serializer-usage: false
+      provider: cluster
+    persistence:
+      journal:
+        plugin: akka.persistence.journal.inmem
+        leveldb.dir: target/example/journal
+      snapshot-store:
+        plugin: akka.persistence.snapshot-store.local
+        local.dir: target/example/snapshots
+    remote:
+      artery:
+        enabled: on
+        transport: tcp
+        canonical:
+          hostname: ${alpha.server.host}
+          port: 8070
+    cluster:
+      seed-nodes: ["akka://alpha-cluster@127.0.0.1:8070"]
+      sharding:
+        state-store-mode: "persistence"
+        remember-entities: true
+        shard-failure-backoff: "5 s"
 
 management:
   endpoints:
@@ -120,14 +136,51 @@ spring:
 
 ---
 spring:
-  profiles: akka-persistence-redis
+  profiles: cluster
+
+alpha:
+  feature:
+    akka:
+      enabled: true
+      channel:
+        type: kafka
+
 akkaConfig:
-  akka.persistence.journal.plugin: akka-persistence-redis.journal
-  akka.persistence.snapshot-store.plugin: akka-persistence-redis.snapshot
-  akka-persistence-redis:
-    redis:
-      mode: simple
-      host: localhost
-      port: 6379
-      database: 0
-      #password:
+  akka:
+    actor:
+      provider: cluster
+    persistence: # redis persistence
+      journal:
+        plugin: akka-persistence-redis.journal
+      snapshot-store:
+        plugin: akka-persistence-redis.snapshot
+    kafka:
+      consumer:
+        poll-interval: 50ms
+        stop-timeout: 30s
+        close-timeout: 20s
+        commit-timeout: 15s
+        commit-time-warning: 1s
+        commit-refresh-interval: infinite
+        use-dispatcher: "akka.kafka.default-dispatcher"
+        kafka-clients.enable.auto.commit: false
+        wait-close-partition: 500ms
+        position-timeout: 5s
+        offset-for-times-timeout: 5s
+        metadata-request-timeout: 5s
+        eos-draining-check-interval: 30ms
+        partition-handler-warning: 5s
+        connection-checker.enable: false
+        connection-checker.max-retries: 3
+        connection-checker.check-interval: 15s
+        connection-checker.backoff-factor: 2.0
+        max-batch: 1000
+        max-interval: 10s
+        parallelism: 1
+
+akka-persistence-redis:
+  redis:
+    mode: "simple"
+    host: "127.0.0.1"
+    port: 6379
+    database: 0
\ No newline at end of file


[servicecomb-pack] 12/42: SCB-1368 Fix log information bug


commit 92264de464ba175c6d2884750a95c83121dc32d9
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 19:11:49 2019 +0800

    SCB-1368 Fix log information bug
---
 .../repository/elasticsearch/ElasticsearchTransactionRepository.java    | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java
index 15c9cbd..1a33ec9 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/elasticsearch/ElasticsearchTransactionRepository.java
@@ -243,10 +243,10 @@ public class ElasticsearchTransactionRepository implements TransactionRepository
     metricsService.metrics().doRepositoryAccepted(queries.size());
     long end = System.currentTimeMillis();
     metricsService.metrics().doRepositoryAvgTime((end - begin) / queries.size());
-    queries.clear();
     LOG.info("save queries={}, received={}, accepted={}", queries.size(),
         metricsService.metrics().getRepositoryReceived(),
         metricsService.metrics().getRepositoryAccepted());
+    queries.clear();
   }
 
   class RefreshTimer implements Runnable {
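The one-line move above fixes a classic ordering bug: `queries.clear()` ran before the `LOG.info` that reports `queries.size()`, so the log always printed `queries=0`. A minimal sketch of both orderings:

```java
import java.util.List;

public class LogOrderSketch {
  // Returns what a "save queries={}" log line would report.
  static int buggyReportedSize(List<String> queries) {
    queries.clear();       // cleared first ...
    return queries.size(); // ... so the log always reports 0
  }

  static int fixedReportedSize(List<String> queries) {
    int reported = queries.size(); // read the size for the log first ...
    queries.clear();               // ... then clear the batch
    return reported;
  }
}
```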


[servicecomb-pack] 06/42: SCB-1368 Refactoring memory channel


commit 03d86caf6c0dd728234da6c413f4683445013855
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:21:49 2019 +0800

    SCB-1368 Refactoring memory channel
---
 .../memory/MemoryChannelAutoConfiguration.java     | 62 +++++++++++++++++++
 .../channel/memory/MemorySagaEventConsumer.java    | 69 ++++++++++++++++++++++
 2 files changed, 131 insertions(+)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryChannelAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryChannelAutoConfiguration.java
new file mode 100644
index 0000000..7ad7878
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemoryChannelAutoConfiguration.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.servicecomb.pack.alpha.fsm.channel.memory;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import java.lang.invoke.MethodHandles;
+import javax.annotation.PostConstruct;
+import org.apache.servicecomb.pack.alpha.core.fsm.channel.ActorEventChannel;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Qualifier;
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+
+@Configuration
+@ConditionalOnProperty(value = "alpha.feature.akka.channel.type", havingValue = "memory", matchIfMissing = true)
+public class MemoryChannelAutoConfiguration {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  @Value("${alpha.feature.akka.channel.memory.size:-1}")
+  int memoryEventChannelMemorySize;
+
+  @PostConstruct
+  public void init(){
+    LOG.info("Memory Channel Init");
+  }
+
+  @Bean(name = "memoryEventChannel")
+  @ConditionalOnMissingBean(ActorEventChannel.class)
+  public ActorEventChannel memoryEventChannel(MetricsService metricsService) {
+    return new MemoryActorEventChannel(metricsService, memoryEventChannelMemorySize);
+  }
+
+  @Bean
+  MemorySagaEventConsumer sagaEventMemoryConsumer(ActorSystem actorSystem,
+      @Qualifier("sagaShardRegionActor") ActorRef sagaShardRegionActor,
+      MetricsService metricsService,
+      @Qualifier("memoryEventChannel") ActorEventChannel actorEventChannel) {
+    return new MemorySagaEventConsumer(actorSystem, sagaShardRegionActor, metricsService,
+        (MemoryActorEventChannel) actorEventChannel);
+  }
+}
\ No newline at end of file
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemorySagaEventConsumer.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemorySagaEventConsumer.java
new file mode 100644
index 0000000..f2af56b
--- /dev/null
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/memory/MemorySagaEventConsumer.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.servicecomb.pack.alpha.fsm.channel.memory;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import java.lang.invoke.MethodHandles;
+import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
+import org.apache.servicecomb.pack.alpha.fsm.channel.AbstractEventConsumer;
+import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class MemorySagaEventConsumer extends AbstractEventConsumer {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  final MemoryActorEventChannel channel;
+
+  public MemorySagaEventConsumer(ActorSystem actorSystem, ActorRef sagaShardRegionActor, MetricsService metricsService,
+      MemoryActorEventChannel channel) {
+    super(actorSystem, sagaShardRegionActor, metricsService);
+    this.channel = channel;
+    new Thread(new MemorySagaEventConsumer.EventConsumer(), "MemorySagaEventConsumer").start();
+  }
+
+  class EventConsumer implements Runnable {
+
+    @Override
+    public void run() {
+      while (true) {
+        try {
+          BaseEvent event = channel.getEventQueue().peek();
+          if (event != null) {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("event {}", event);
+            }
+            long begin = System.currentTimeMillis();
+            metricsService.metrics().doActorReceived();
+            sagaShardRegionActor.tell(event, sagaShardRegionActor);
+            long end = System.currentTimeMillis();
+            metricsService.metrics().doActorAccepted();
+            metricsService.metrics().doActorAvgTime(end - begin);
+            channel.getEventQueue().poll();
+          } else {
+            Thread.sleep(10);
+          }
+        } catch (Exception ex) {
+          metricsService.metrics().doActorRejected();
+          LOG.error(ex.getMessage(), ex);
+        }
+      }
+    }
+  }
+}

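The consumer above uses a peek-then-poll pattern on the in-memory queue: an event is removed only after it has been handed to the shard region actor, so a failure in delivery leaves the event at the head of the queue for retry. A minimal standalone sketch of that pattern follows (no Akka dependency; the `delivered` list and `drain` helper are hypothetical stand-ins for `sagaShardRegionActor.tell(...)` and the consumer loop):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the peek-then-poll loop used by MemorySagaEventConsumer:
// an element is removed only after the handler succeeds, so a failing
// handler would retry the same element instead of losing it.
public class PeekPollLoop {

  public static int drain(LinkedBlockingQueue<String> queue, List<String> delivered) {
    int handled = 0;
    while (true) {
      String event = queue.peek();   // look at the head without removing it
      if (event == null) {
        break;                       // the real consumer sleeps 10ms and retries here
      }
      delivered.add(event);          // stands in for sagaShardRegionActor.tell(event, ...)
      queue.poll();                  // remove only after successful delivery
      handled++;
    }
    return handled;
  }

  public static void main(String[] args) {
    LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
    queue.add("TxStartedEvent");
    queue.add("TxEndedEvent");
    List<String> delivered = new ArrayList<>();
    System.out.println(drain(queue, delivered) + " " + delivered);
  }
}
```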

[servicecomb-pack] 21/42: SCB-1368 Change ShardRegion Actor name to saga-shard-region-actor

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit 5693b81d736386a351848ee3f5e91ec6efb7db7a
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 01:09:23 2019 +0800

    SCB-1368 Change ShardRegion Actor name to saga-shard-region-actor
---
 .../pack/alpha/fsm/SagaShardRegionActor.java          | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
index d43ba85..6e39033 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/SagaShardRegionActor.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.servicecomb.pack.alpha.fsm;
 
 import akka.actor.AbstractActor;
@@ -48,7 +65,7 @@ public class SagaShardRegionActor extends AbstractActor {
     ClusterShardingSettings settings = ClusterShardingSettings.create(system);
     workerRegion = ClusterSharding.get(system)
         .start(
-            "saga-actor",
+            "saga-shard-region-actor",
             Props.create(SagaActor.class),
             settings,
             messageExtractor);


[servicecomb-pack] 14/42: SCB-1368 Log4j2 disable the automatic shutdown hook



commit ae45ed341076dcbb55d82c019cd47fe450388a98
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 00:38:49 2019 +0800

    SCB-1368 Log4j2 disable the automatic shutdown hook
---
 alpha/alpha-server/src/main/resources/log4j2.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/alpha/alpha-server/src/main/resources/log4j2.xml b/alpha/alpha-server/src/main/resources/log4j2.xml
index a5ec6c6..bb03702 100644
--- a/alpha/alpha-server/src/main/resources/log4j2.xml
+++ b/alpha/alpha-server/src/main/resources/log4j2.xml
@@ -16,7 +16,7 @@
   ~ limitations under the License.
   -->
 
-<Configuration status="WARN" monitorInterval="30">
+<Configuration status="WARN" monitorInterval="30" shutdownHook="disable">
   <Properties>
     <Property name="LOG_PATTERN">
       %d{yyyy-MM-dd HH:mm:ss.SSS} %5p ${hostName} --- [%15.15t] %-40.40c{1.} : %m%n%ex


[servicecomb-pack] 37/42: SCB-1368 Fix metric statistics bug


commit b689c0743819d9ac829da7748baa4228b985eb78
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 28 18:32:59 2019 +0800

    SCB-1368 Fix metric statistics bug
---
 .../pack/alpha/core/metrics/MetricsBean.java       |  3 --
 .../pack/alpha/core/metrics/MetricsBeanTest.java   | 57 ++++++++++++++++++++++
 2 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java
index fc45975..6146605 100644
--- a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java
+++ b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java
@@ -50,7 +50,6 @@ public class MetricsBean {
   }
 
   public void doEventRejected() {
-    eventReceived.decrementAndGet();
     eventRejected.incrementAndGet();
   }
 
@@ -71,7 +70,6 @@ public class MetricsBean {
   }
 
   public void doActorRejected() {
-    actorReceived.decrementAndGet();
     actorRejected.incrementAndGet();
   }
 
@@ -124,7 +122,6 @@ public class MetricsBean {
   }
 
   public void doRepositoryRejected() {
-    repositoryReceived.decrementAndGet();
     repositoryRejected.incrementAndGet();
   }
 
diff --git a/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java b/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java
new file mode 100644
index 0000000..232d4fa
--- /dev/null
+++ b/alpha/alpha-core/src/test/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBeanTest.java
@@ -0,0 +1,57 @@
+package org.apache.servicecomb.pack.alpha.core.metrics;
+
+import static org.junit.Assert.assertEquals;
+
+import org.junit.Test;
+
+public class MetricsBeanTest {
+
+  @Test
+  public void testEventRecevie(){
+    MetricsBean metric = new MetricsBean();
+    //accepted
+    metric.doEventReceived();
+    metric.doEventAccepted();
+    assertEquals(metric.getEventReceived(),1l);
+    assertEquals(metric.getEventAccepted(),1l);
+    //rejected
+    metric.doEventReceived();
+    metric.doEventRejected();
+    assertEquals(metric.getEventReceived(),2l);
+    assertEquals(metric.getEventAccepted(),1l);
+    assertEquals(metric.getEventRejected(),1l);
+  }
+
+  @Test
+  public void testActorReceive(){
+    MetricsBean metric = new MetricsBean();
+    //accepted
+    metric.doActorReceived();
+    metric.doActorAccepted();
+    assertEquals(metric.getActorReceived(),1l);
+    assertEquals(metric.getActorAccepted(),1l);
+    //rejected
+    metric.doActorReceived();
+    metric.doActorRejected();
+    assertEquals(metric.getActorReceived(),2l);
+    assertEquals(metric.getActorAccepted(),1l);
+    assertEquals(metric.getActorRejected(),1l);
+  }
+
+  @Test
+  public void testRepositoryReceive(){
+    MetricsBean metric = new MetricsBean();
+    //accepted
+    metric.doRepositoryReceived();
+    metric.doRepositoryAccepted();
+    assertEquals(metric.getRepositoryReceived(),1l);
+    assertEquals(metric.getRepositoryAccepted(),1l);
+    //rejected
+    metric.doRepositoryReceived();
+    metric.doRepositoryRejected();
+    assertEquals(metric.getRepositoryReceived(),2l);
+    assertEquals(metric.getRepositoryAccepted(),1l);
+    assertEquals(metric.getRepositoryRejected(),1l);
+  }
+
+}

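The fix above makes the counters monotonic: a rejection now only increments its own counter instead of also decrementing the received count, so after every event settles, received == accepted + rejected — which is exactly what the new MetricsBeanTest asserts. A standalone sketch of the corrected semantics (hypothetical `Counters` class, AtomicLong as in MetricsBean):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the corrected counter semantics: "received" is monotonic and a
// rejection only bumps its own counter, preserving received == accepted + rejected.
public class Counters {

  private final AtomicLong received = new AtomicLong();
  private final AtomicLong accepted = new AtomicLong();
  private final AtomicLong rejected = new AtomicLong();

  public void onReceived() { received.incrementAndGet(); }
  public void onAccepted() { accepted.incrementAndGet(); }
  public void onRejected() { rejected.incrementAndGet(); } // no decrement of "received"

  public long received() { return received.get(); }
  public long accepted() { return accepted.get(); }
  public long rejected() { return rejected.get(); }

  public static void main(String[] args) {
    Counters c = new Counters();
    c.onReceived(); c.onAccepted();   // one event accepted
    c.onReceived(); c.onRejected();   // one event rejected
    System.out.println(c.received() + " " + c.accepted() + " " + c.rejected());
  }
}
```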

[servicecomb-pack] 16/42: SCB-1368 Polishing


commit 94a9287be5d6ac6108b37451ce5fbc6c529c1b62
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 21 00:48:33 2019 +0800

    SCB-1368 Polishing
---
 .../pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java      | 1 +
 1 file changed, 1 insertion(+)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
index ad44641..a96621c 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/channel/kafka/KafkaChannelAutoConfiguration.java
@@ -14,6 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.servicecomb.pack.alpha.fsm.channel.kafka;
 
 import akka.actor.ActorRef;


[servicecomb-pack] 28/42: SCB-1368 Added serialVersionUID


commit c0224c1d9d31ede6a3619f07108cc0d103c3cc43
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 11:03:52 2019 +0800

    SCB-1368 Added serialVersionUID
---
 .../apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java    | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
index 0f13c6d..18ff36a 100644
--- a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
+++ b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
@@ -47,6 +47,8 @@ import org.apache.servicecomb.pack.alpha.core.fsm.event.TxStartedEvent;
     @JsonSubTypes.Type(value = TxCompensatedEvent.class, name = "TxCompensatedEvent")
 })
 public abstract class BaseEvent implements Serializable {
+
+  private static final long serialVersionUID = 7587021626678201246L;
   private final ObjectMapper mapper = new ObjectMapper();
   private String serviceName;
   private String instanceId;


[servicecomb-pack] 41/42: SCB-1368 Update test cases timeout for CI


commit ddecff7b4accf10bbe66b813c84ef1b36c1118fd
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sun Sep 29 00:21:28 2019 +0800

    SCB-1368 Update test cases timeout for CI
---
 .../src/test/java/org/apache/servicecomb/pack/PackStepdefs.java         | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/acceptance-tests/acceptance-pack-akka-spring-demo/src/test/java/org/apache/servicecomb/pack/PackStepdefs.java b/acceptance-tests/acceptance-pack-akka-spring-demo/src/test/java/org/apache/servicecomb/pack/PackStepdefs.java
index c66e46f..19c1487 100644
--- a/acceptance-tests/acceptance-pack-akka-spring-demo/src/test/java/org/apache/servicecomb/pack/PackStepdefs.java
+++ b/acceptance-tests/acceptance-pack-akka-spring-demo/src/test/java/org/apache/servicecomb/pack/PackStepdefs.java
@@ -147,7 +147,7 @@ public class PackStepdefs implements En {
     List<Map<String, String>> expectedMaps = dataTable.asMaps(String.class, String.class);
     List<Map<String, String>> actualMaps = new ArrayList<>();
 
-    await().atMost(5, SECONDS).until(() -> {
+    await().atMost(10, SECONDS).until(() -> {
       actualMaps.clear();
       Collections.addAll(actualMaps, retrieveDataMaps(address, dataProcessor));
       // write the log if the Map size is not same


[servicecomb-pack] 03/42: SCB-1368 Allows the use of system variables -Dlog-file-name=xxxx.log to define log file name


commit bfe4e1187a2d29e4e4126817ff568ee78e6f98f8
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Thu Sep 5 14:54:00 2019 +0800

    SCB-1368 Allows the use of system variables -Dlog-file-name=xxxx.log to define log file name
---
 alpha/alpha-server/src/main/resources/log4j2.xml | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/alpha/alpha-server/src/main/resources/log4j2.xml b/alpha/alpha-server/src/main/resources/log4j2.xml
index 96a4f32..a5ec6c6 100644
--- a/alpha/alpha-server/src/main/resources/log4j2.xml
+++ b/alpha/alpha-server/src/main/resources/log4j2.xml
@@ -21,6 +21,7 @@
     <Property name="LOG_PATTERN">
       %d{yyyy-MM-dd HH:mm:ss.SSS} %5p ${hostName} --- [%15.15t] %-40.40c{1.} : %m%n%ex
     </Property>
+    <Property name="log-file-name">alpha-server</Property>
   </Properties>
   <Appenders>
     <Console name="Console" target="SYSTEM_OUT">
@@ -28,13 +29,13 @@
         <Pattern>${LOG_PATTERN}</Pattern>
       </PatternLayout>
     </Console>
-    <RollingFile name="FileAppender" fileName="logs/alpha-server.log"
-      filePattern="logs/alpha-server-%d{yyyy-MM-dd}-%i.log">
+    <RollingFile name="FileAppender" fileName="logs/${sys:log-file-name}.log"
+      filePattern="logs/${sys:log-file-name}-%d{yyyy-MM-dd}-%i.log">
       <PatternLayout>
         <Pattern>${LOG_PATTERN}</Pattern>
       </PatternLayout>
       <Policies>
-        <SizeBasedTriggeringPolicy size="10MB" />
+        <SizeBasedTriggeringPolicy size="100MB" />
       </Policies>
       <DefaultRolloverStrategy max="10"/>
     </RollingFile>


[servicecomb-pack] 42/42: SCB-1368 Added null protection logic


commit ed6fa9d4beb24c2dfab448f0be6dd53aed9caa56
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sun Sep 29 09:54:20 2019 +0800

    SCB-1368 Added null protection logic
---
 .../pack/alpha/server/fsm/FsmSagaDataController.java  | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/alpha/alpha-server/src/main/java/org/apache/servicecomb/pack/alpha/server/fsm/FsmSagaDataController.java b/alpha/alpha-server/src/main/java/org/apache/servicecomb/pack/alpha/server/fsm/FsmSagaDataController.java
index bd4d07d..0d51671 100644
--- a/alpha/alpha-server/src/main/java/org/apache/servicecomb/pack/alpha/server/fsm/FsmSagaDataController.java
+++ b/alpha/alpha-server/src/main/java/org/apache/servicecomb/pack/alpha/server/fsm/FsmSagaDataController.java
@@ -56,15 +56,16 @@ class FsmSagaDataController {
     LOG.info("Get the events request");
     List<Map> eventVos = new LinkedList<>();
     SagaData data = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
-    data.getEvents().forEach(event -> {
-      Map<String,String> obj = new HashMap();
-      obj.put("serviceName",event.getServiceName());
-      obj.put("type",event.getClass().getSimpleName());
-      eventVos.add(obj);
-    });
-    LOG.info("Get the event size {}",eventVos.size());
-    LOG.info("Get the event data {}",eventVos);
-
+    if (data != null && data.getEvents() != null) {
+      data.getEvents().forEach(event -> {
+        Map<String, String> obj = new HashMap();
+        obj.put("serviceName", event.getServiceName());
+        obj.put("type", event.getClass().getSimpleName());
+        eventVos.add(obj);
+      });
+      LOG.info("Get the event size {}", eventVos.size());
+      LOG.info("Get the event data {}", eventVos);
+    }
     return ResponseEntity.ok(eventVos);
   }
 
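The guard added above returns an empty list when no saga data has been recorded yet instead of throwing a NullPointerException on `data.getEvents()`. The same null-protection pattern in isolation (the `toEventVos` helper and its String event type are hypothetical stand-ins for the controller's SagaData events):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

// Sketch of the null protection added to FsmSagaDataController: build the view
// objects only when the event list is present, otherwise return an empty result.
public class NullSafeEvents {

  static List<Map<String, String>> toEventVos(List<String> events) {
    List<Map<String, String>> eventVos = new LinkedList<>();
    if (events != null) {                 // guard replaces the unconditional forEach
      for (String type : events) {
        Map<String, String> obj = new HashMap<>();
        obj.put("type", type);
        eventVos.add(obj);
      }
    }
    return eventVos;                      // empty list, never a NullPointerException
  }

  public static void main(String[] args) {
    System.out.println(toEventVos(null));
    System.out.println(toEventVos(Arrays.asList("SagaStartedEvent")));
  }
}
```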


[servicecomb-pack] 30/42: SCB-1368 Optimize log information


commit 142ba864860b6c52d90bb85b81701ad66f8318cf
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 11:54:24 2019 +0800

    SCB-1368 Optimize log information
---
 .../pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java      | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java
index 90fc25a..f8b50c7 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/spring/integration/akka/SagaDataExtension.java
@@ -90,9 +90,6 @@ public class SagaDataExtension extends AbstractExtensionId<SagaDataExt> {
           .build();
       repositoryChannel.send(record);
       sagaDataMap.remove(globalTxId);
-      if(LOG.isDebugEnabled()){
-        LOG.info("send repository channel {}",globalTxId);
-      }
     }
 
     // Only for Test


[servicecomb-pack] 11/42: SCB-1368 Remove persistent queues for reliability


commit 0373fa796d12682e488939220125ab182e6d4fa7
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:33:12 2019 +0800

    SCB-1368 Remove persistent queues for reliability
---
 ...va => DefaultTransactionRepositoryChannel.java} | 40 ++---------
 .../pack/alpha/fsm/sink/SagaActorEventSender.java  | 82 ----------------------
 .../servicecomb/pack/alpha/fsm/SagaActorTest.java  |  6 +-
 .../pack/alpha/fsm/SagaIntegrationTest.java        | 29 ++++----
 4 files changed, 20 insertions(+), 137 deletions(-)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/MemoryTransactionRepositoryChannel.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/DefaultTransactionRepositoryChannel.java
similarity index 51%
rename from alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/MemoryTransactionRepositoryChannel.java
rename to alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/DefaultTransactionRepositoryChannel.java
index ebea958..25e31d8 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/MemoryTransactionRepositoryChannel.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/repository/channel/DefaultTransactionRepositoryChannel.java
@@ -17,55 +17,23 @@
 
 package org.apache.servicecomb.pack.alpha.fsm.repository.channel;
 
-import java.lang.invoke.MethodHandles;
-import java.util.concurrent.LinkedBlockingQueue;
+import org.apache.servicecomb.pack.alpha.core.fsm.repository.model.GlobalTransaction;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.apache.servicecomb.pack.alpha.fsm.repository.AbstractTransactionRepositoryChannel;
-import org.apache.servicecomb.pack.alpha.core.fsm.repository.model.GlobalTransaction;
 import org.apache.servicecomb.pack.alpha.fsm.repository.TransactionRepository;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
-public class MemoryTransactionRepositoryChannel extends AbstractTransactionRepositoryChannel {
+public class DefaultTransactionRepositoryChannel extends AbstractTransactionRepositoryChannel {
 
-  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private final LinkedBlockingQueue<GlobalTransaction> globalTransactionQueue;
-  private int size;
-
-  public MemoryTransactionRepositoryChannel(TransactionRepository repository, int size,
-      MetricsService metricsService) {
+  public DefaultTransactionRepositoryChannel(TransactionRepository repository, MetricsService metricsService) {
     super(repository, metricsService);
-    this.size = size > 0 ? size : Integer.MAX_VALUE;
-    globalTransactionQueue = new LinkedBlockingQueue(this.size);
-    new Thread(new GlobalTransactionConsumer(), "MemoryTransactionRepositoryChannel").start();
   }
 
   @Override
   public void sendTo(GlobalTransaction transaction) {
     try {
-      globalTransactionQueue.put(transaction);
+      repository.send(transaction);
     } catch (Exception e) {
       throw new RuntimeException(e);
     }
   }
-
-  class GlobalTransactionConsumer implements Runnable {
-
-    @Override
-    public void run() {
-      while (true) {
-        try {
-          GlobalTransaction transaction = globalTransactionQueue.peek();
-          if (transaction != null) {
-            repository.send(transaction);
-            globalTransactionQueue.poll();
-          } else {
-            Thread.sleep(10);
-          }
-        } catch (Exception ex) {
-          LOG.error(ex.getMessage(), ex);
-        }
-      }
-    }
-  }
 }
diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/sink/SagaActorEventSender.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/sink/SagaActorEventSender.java
deleted file mode 100644
index 567185b..0000000
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/sink/SagaActorEventSender.java
+++ /dev/null
@@ -1,82 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.servicecomb.pack.alpha.fsm.sink;
-
-import akka.actor.ActorRef;
-import akka.actor.ActorSelection;
-import akka.actor.ActorSystem;
-import akka.util.Timeout;
-import java.lang.invoke.MethodHandles;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.servicecomb.pack.alpha.core.fsm.sink.ActorEventSink;
-import org.apache.servicecomb.pack.alpha.fsm.SagaActor;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.SagaStartedEvent;
-import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
-import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.beans.factory.annotation.Autowired;
-import scala.concurrent.Await;
-import scala.concurrent.Future;
-import scala.concurrent.duration.Duration;
-
-public class SagaActorEventSender implements ActorEventSink {
-
-  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private final MetricsService metricsService;
-
-  @Autowired
-  ActorSystem system;
-
-  public SagaActorEventSender(
-      MetricsService metricsService) {
-    this.metricsService = metricsService;
-  }
-
-  private static final Timeout lookupTimeout = new Timeout(Duration.create(1, TimeUnit.SECONDS));
-
-  public void send(BaseEvent event) {
-    long begin = System.currentTimeMillis();
-    metricsService.metrics().doActorReceived();
-    try{
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("send {} ", event.toString());
-      }
-      if (event instanceof SagaStartedEvent) {
-        final ActorRef saga = system
-            .actorOf(SagaActor.props(event.getGlobalTxId()), event.getGlobalTxId());
-        saga.tell(event, ActorRef.noSender());
-      } else {
-        ActorSelection actorSelection = system
-            .actorSelection("/user/" + event.getGlobalTxId());
-        //TODO We should leverage the async API that actor provides to send out the message
-        final Future<ActorRef> actorRefFuture = actorSelection.resolveOne(lookupTimeout);
-        final ActorRef saga = Await.result(actorRefFuture, lookupTimeout.duration());
-        saga.tell(event, ActorRef.noSender());
-      }
-      metricsService.metrics().doActorAccepted();
-      long end = System.currentTimeMillis();
-      metricsService.metrics().doActorAvgTime(end - begin);
-    }catch (Exception ex){
-      metricsService.metrics().doActorRejected();
-      throw new RuntimeException(ex);
-    }
-  }
-}
diff --git a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java
index fe00de2..e32fbfe 100644
--- a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java
+++ b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaActorTest.java
@@ -33,13 +33,12 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.UUID;
-import org.apache.servicecomb.pack.alpha.fsm.SagaActorState;
 import org.apache.servicecomb.pack.alpha.core.fsm.TxState;
 import org.apache.servicecomb.pack.alpha.core.fsm.event.base.BaseEvent;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.apache.servicecomb.pack.alpha.fsm.model.SagaData;
 import org.apache.servicecomb.pack.alpha.fsm.repository.TransactionRepositoryChannel;
-import org.apache.servicecomb.pack.alpha.fsm.repository.channel.MemoryTransactionRepositoryChannel;
+import org.apache.servicecomb.pack.alpha.fsm.repository.channel.DefaultTransactionRepositoryChannel;
 import org.apache.servicecomb.pack.alpha.fsm.repository.elasticsearch.ElasticsearchTransactionRepository;
 import org.apache.servicecomb.pack.alpha.fsm.repository.TransactionRepository;
 import org.apache.servicecomb.pack.alpha.fsm.spring.integration.akka.SagaDataExtension;
@@ -101,8 +100,7 @@ public class SagaActorTest {
   @Before
   public void before(){
     TransactionRepository repository = new ElasticsearchTransactionRepository(template, metricsService, 0,0);
-    TransactionRepositoryChannel repositoryChannel = new MemoryTransactionRepositoryChannel(repository, -1,
-        metricsService);
+    TransactionRepositoryChannel repositoryChannel = new DefaultTransactionRepositoryChannel(repository, metricsService);
     SAGA_DATA_EXTENSION_PROVIDER.get(system).setRepositoryChannel(repositoryChannel);
   }
 
diff --git a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
index 2acb135..03a89b0 100644
--- a/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
+++ b/alpha/alpha-fsm/src/test/java/org/apache/servicecomb/pack/alpha/fsm/SagaIntegrationTest.java
@@ -24,11 +24,10 @@ import static org.junit.Assert.assertNotNull;
 
 import akka.actor.ActorSystem;
 import java.util.UUID;
-import org.apache.servicecomb.pack.alpha.fsm.SagaActorState;
 import org.apache.servicecomb.pack.alpha.core.fsm.TxState;
+import org.apache.servicecomb.pack.alpha.fsm.channel.memory.MemoryActorEventChannel;
 import org.apache.servicecomb.pack.alpha.fsm.metrics.MetricsService;
 import org.apache.servicecomb.pack.alpha.fsm.model.SagaData;
-import org.apache.servicecomb.pack.alpha.fsm.sink.SagaActorEventSender;
 import org.apache.servicecomb.pack.alpha.fsm.spring.integration.akka.SagaDataExtension;
 import org.junit.After;
 import org.junit.Test;
@@ -61,7 +60,7 @@ public class SagaIntegrationTest {
   ActorSystem system;
   
   @Autowired
-  SagaActorEventSender sagaActorEventSender;
+  MemoryActorEventChannel memoryActorEventChannel;
 
   @Autowired
   MetricsService metricsService;
@@ -81,7 +80,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.successfulEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -105,7 +104,7 @@ public class SagaIntegrationTest {
     final String globalTxId = UUID.randomUUID().toString();
     final String localTxId_1 = UUID.randomUUID().toString();
     SagaEventSender.firstTxAbortedEvents(globalTxId, localTxId_1).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
 
     await().atMost(2, SECONDS).until(() -> {
@@ -125,7 +124,7 @@ public class SagaIntegrationTest {
     final String localTxId_1 = UUID.randomUUID().toString();
     final String localTxId_2 = UUID.randomUUID().toString();
     SagaEventSender.middleTxAbortedEvents(globalTxId, localTxId_1, localTxId_2).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -146,7 +145,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.lastTxAbortedEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -168,7 +167,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.sagaAbortedEventBeforeTxComponsitedEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -190,7 +189,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.receivedRemainingEventAfterFirstTxAbortedEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -212,7 +211,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.sagaAbortedEventAfterAllTxEndedsEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -234,7 +233,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.omegaSendSagaTimeoutEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -257,7 +256,7 @@ public class SagaIntegrationTest {
     final String localTxId_3 = UUID.randomUUID().toString();
     final int timeout = 5; // second
     SagaEventSender.sagaActorTriggerTimeoutEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3, timeout).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(timeout + 2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -279,7 +278,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.successfulWithTxConcurrentEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -301,7 +300,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.successfulWithTxConcurrentCrossEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
@@ -323,7 +322,7 @@ public class SagaIntegrationTest {
     final String localTxId_2 = UUID.randomUUID().toString();
     final String localTxId_3 = UUID.randomUUID().toString();
     SagaEventSender.lastTxAbortedEventWithTxConcurrentEvents(globalTxId, localTxId_1, localTxId_2, localTxId_3).stream().forEach( event -> {
-      sagaActorEventSender.send(event);
+      memoryActorEventChannel.send(event);
     });
     await().atMost(2, SECONDS).until(() -> {
       SagaData sagaData = SagaDataExtension.SAGA_DATA_EXTENSION_PROVIDER.get(system).getLastSagaData();
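The hunks above replace the direct `sagaActorEventSender` with a `memoryActorEventChannel`, routing test events through a channel abstraction instead of straight to the actor. As a rough illustration of that idea (the names and shapes here are hypothetical, not the project's actual API), a minimal in-memory event channel could look like:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: a generic channel that decouples event producers
// from the consumer, in the spirit of the memory-channel refactor above.
interface EventChannel<T> {
  void send(T event);
}

class MemoryEventChannel<T> implements EventChannel<T> {
  private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

  @Override
  public void send(T event) {
    queue.offer(event); // buffer the event; a consumer drains the queue
  }

  T poll() {
    return queue.poll(); // null when the channel is empty
  }
}
```

Swapping implementations of such an interface (memory, Kafka, Redis) is what lets the test code stay unchanged apart from the channel name.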


[servicecomb-pack] 25/42: SCB-1368 Added the globalTxId prefix for concurrent

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ningjiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/servicecomb-pack.git

commit eff495dfc3cb8e7bf80af17c3772ecd0bcf4032e
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Thu Sep 26 17:52:53 2019 +0800

    SCB-1368 Added the globalTxId prefix for concurrent
---
 .../pack/alpha/benchmark/SagaEventBenchmark.java     | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java b/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java
index 322aa54..e73ad9d 100644
--- a/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java
+++ b/alpha/alpha-benchmark/src/main/java/org/apache/servicecomb/pack/alpha/benchmark/SagaEventBenchmark.java
@@ -21,6 +21,7 @@ import java.lang.invoke.MethodHandles;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.OptionalDouble;
+import java.util.Random;
 import java.util.UUID;
 import java.util.concurrent.CountDownLatch;
 
@@ -62,9 +63,9 @@ public class SagaEventBenchmark {
     CountDownLatch begin = new CountDownLatch(1);
     CountDownLatch end = new CountDownLatch(concurrency);
     begin.countDown();
+    String[] id_prefixs = generateRandomIdPrefix(concurrency);
     for (int i = 0; i < concurrency; i++) {
-      String id_prefix = "";
-      Execute execute = new Execute(sender, id_prefix,requests / concurrency, begin, end);
+      Execute execute = new Execute(sender, id_prefixs[i],requests / concurrency, begin, end);
       new Thread(execute).start();
     }
     try {
@@ -155,7 +156,7 @@ public class SagaEventBenchmark {
         for (int i = 0; i < requests; i++) {
           metrics.completeRequestsIncrement();
           long s = System.currentTimeMillis();
-          final String globalTxId = UUID.randomUUID().toString();
+          final String globalTxId = id_prefix + "-" + i;
           final String localTxId_1 = UUID.randomUUID().toString();
           final String localTxId_2 = UUID.randomUUID().toString();
           final String localTxId_3 = UUID.randomUUID().toString();
@@ -209,4 +210,17 @@ public class SagaEventBenchmark {
         new TxEvent(EventType.SagaEndedEvent, globalTxId, globalTxId, globalTxId, "", 0, null, 0));
     return sagaEvents;
   }
+
+  private String[] generateRandomIdPrefix(int numberOfWords) {
+    String[] randomStrings = new String[numberOfWords];
+    Random random = new Random();
+    for (int i = 0; i < numberOfWords; i++) {
+      char[] word = new char[8];
+      for (int j = 0; j < word.length; j++) {
+        word[j] = (char) ('a' + random.nextInt(26));
+      }
+      randomStrings[i] = new String(word);
+    }
+    return randomStrings;
+  }
 }
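The commit above replaces the per-request random UUID `globalTxId` with a per-thread prefix plus a counter, so each concurrent worker produces ids that are unique but grouped by thread. A standalone sketch mirroring the committed `generateRandomIdPrefix` logic (one random 8-letter lowercase word per worker):

```java
import java.util.Random;

// Mirrors the committed generateRandomIdPrefix: each concurrent worker
// gets a random 8-letter lowercase word to use as its globalTxId prefix.
class IdPrefixGenerator {
  static String[] generate(int numberOfWords) {
    String[] prefixes = new String[numberOfWords];
    Random random = new Random();
    for (int i = 0; i < numberOfWords; i++) {
      char[] word = new char[8];
      for (int j = 0; j < word.length; j++) {
        word[j] = (char) ('a' + random.nextInt(26)); // a..z
      }
      prefixes[i] = new String(word);
    }
    return prefixes;
  }
}
```

Each worker then builds ids like `prefix + "-" + i`, as the diff shows for `globalTxId`.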


[servicecomb-pack] 07/42: SCB-1368 Add Akka cluster dependencies


commit 4c2b1e4d7c1ee9d7dec1822b59e878f8719761fd
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Tue Sep 10 17:23:17 2019 +0800

    SCB-1368 Add Akka cluster dependencies
---
 alpha/alpha-fsm/pom.xml    | 52 +++++++++++++++++++++-------------------------
 alpha/alpha-server/pom.xml |  5 +++++
 pom.xml                    | 38 +++++++++++++++++++++++++++++++++
 3 files changed, 67 insertions(+), 28 deletions(-)

diff --git a/alpha/alpha-fsm/pom.xml b/alpha/alpha-fsm/pom.xml
index b51bec9..48bcada 100644
--- a/alpha/alpha-fsm/pom.xml
+++ b/alpha/alpha-fsm/pom.xml
@@ -43,21 +43,6 @@
         <type>pom</type>
         <scope>import</scope>
       </dependency>
-      <dependency>
-        <groupId>com.typesafe.akka</groupId>
-        <artifactId>akka-persistence_2.12</artifactId>
-        <version>${akka.version}</version>
-      </dependency>
-      <dependency>
-        <groupId>com.typesafe.akka</groupId>
-        <artifactId>akka-cluster_2.12</artifactId>
-        <version>${akka.version}</version>
-      </dependency>
-      <dependency>
-        <groupId>com.typesafe.akka</groupId>
-        <artifactId>akka-cluster-metrics_2.12</artifactId>
-        <version>${akka.version}</version>
-      </dependency>
     </dependencies>
   </dependencyManagement>
 
@@ -78,16 +63,16 @@
       <artifactId>alpha-core</artifactId>
     </dependency>
     <dependency>
-      <groupId>org.apache.servicecomb.pack</groupId>
-      <artifactId>alpha-fsm-channel-redis</artifactId>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
     </dependency>
     <dependency>
-      <groupId>org.apache.servicecomb.pack</groupId>
-      <artifactId>alpha-fsm-channel-kafka</artifactId>
+      <groupId>org.springframework.kafka</groupId>
+      <artifactId>spring-kafka</artifactId>
     </dependency>
     <dependency>
       <groupId>org.springframework.boot</groupId>
-      <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
+      <artifactId>spring-boot-starter-data-redis</artifactId>
     </dependency>
     <dependency>
       <groupId>com.google.guava</groupId>
@@ -124,31 +109,42 @@
       <artifactId>akka-actor_2.12</artifactId>
     </dependency>
     <dependency>
-      <groupId>com.typesafe.akka</groupId>
-      <artifactId>akka-persistence_2.12</artifactId>
-    </dependency>
-    <dependency>
       <groupId>org.fusesource.leveldbjni</groupId>
       <artifactId>leveldbjni-all</artifactId>
-      <version>${leveldbjni-all.version}</version>
     </dependency>
     <dependency>
       <groupId>com.safety-data</groupId>
       <artifactId>akka-persistence-redis_2.12</artifactId>
-      <version>${akka-persistence-redis.version}</version>
     </dependency>
     <dependency>
       <groupId>com.typesafe.akka</groupId>
-      <artifactId>akka-cluster_2.12</artifactId>
+      <artifactId>akka-cluster-metrics_2.12</artifactId>
     </dependency>
     <dependency>
       <groupId>com.typesafe.akka</groupId>
-      <artifactId>akka-cluster-metrics_2.12</artifactId>
+      <artifactId>akka-stream-kafka_2.12</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.typesafe.akka</groupId>
+      <artifactId>akka-cluster-sharding-typed_2.12</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.typesafe.akka</groupId>
+      <artifactId>akka-cluster-typed_2.12</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.typesafe.akka</groupId>
+      <artifactId>akka-persistence-typed_2.12</artifactId>
     </dependency>
     <dependency>
       <groupId>com.typesafe.akka</groupId>
       <artifactId>akka-slf4j_2.12</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>kafka-clients</artifactId>
+      <version>2.1.1</version>
+    </dependency>
 
     <!--
       jmx over http
diff --git a/alpha/alpha-server/pom.xml b/alpha/alpha-server/pom.xml
index f53e09c..d6be3b7 100644
--- a/alpha/alpha-server/pom.xml
+++ b/alpha/alpha-server/pom.xml
@@ -201,6 +201,11 @@
       <groupId>org.hamcrest</groupId>
       <artifactId>hamcrest-all</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>kafka-clients</artifactId>
+      <version>2.1.1</version>
+    </dependency>
   </dependencies>
 
   <build>
diff --git a/pom.xml b/pom.xml
index a7734c3..d41d63f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -58,6 +58,9 @@
 
     <java.chassis.version>1.2.1</java.chassis.version>
     <akka.version>2.5.14</akka.version>
+    <alpakka.version>1.0.5</alpakka.version>
+    <leveldbjni-all.version>1.8</leveldbjni-all.version>
+    <akka-persistence-redis.version>0.4.0</akka-persistence-redis.version>
     <rat.version>0.12</rat.version>
     <maven.failsafe.version>2.19.1</maven.failsafe.version>
     <grpc.version>1.14.0</grpc.version>
@@ -432,6 +435,41 @@
       </dependency>
       <dependency>
         <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-persistence-typed_2.12</artifactId>
+        <version>${akka.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-cluster-typed_2.12</artifactId>
+        <version>${akka.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-cluster-sharding-typed_2.12</artifactId>
+        <version>${akka.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-stream-kafka_2.12</artifactId>
+        <version>${alpakka.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
+        <artifactId>akka-cluster-metrics_2.12</artifactId>
+        <version>${akka.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.safety-data</groupId>
+        <artifactId>akka-persistence-redis_2.12</artifactId>
+        <version>${akka-persistence-redis.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.fusesource.leveldbjni</groupId>
+        <artifactId>leveldbjni-all</artifactId>
+        <version>${leveldbjni-all.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>com.typesafe.akka</groupId>
         <artifactId>akka-slf4j_2.12</artifactId>
         <version>${akka.version}</version>
       </dependency>


[servicecomb-pack] 40/42: SCB-1368 Fix metric statistics bug


commit 80fb4b64384afe90729ad56d4874364bb2c0ff01
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 28 19:28:07 2019 +0800

    SCB-1368 Fix metric statistics bug
---
 .../org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java
index 6146605..281292e 100644
--- a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java
+++ b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/metrics/MetricsBean.java
@@ -185,8 +185,8 @@ public class MetricsBean {
     return repositoryAccepted.get();
   }
 
-  public AtomicLong getRepositoryRejected() {
-    return repositoryRejected;
+  public long getRepositoryRejected() {
+    return repositoryRejected.get();
   }
 
   public double getRepositoryAvgTime() {
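The metrics fix above stops exposing the mutable `AtomicLong` through the getter and returns a primitive snapshot instead, so callers (and JSON serializers reporting the metric) see a plain number they cannot mutate. A minimal sketch of the corrected getter pattern:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the getter fix: return a long snapshot of the counter
// rather than the AtomicLong itself, so callers cannot mutate state
// and serializers emit a plain number.
class Metrics {
  private final AtomicLong repositoryRejected = new AtomicLong();

  void repositoryRejectedIncrement() {
    repositoryRejected.incrementAndGet();
  }

  long getRepositoryRejected() {
    return repositoryRejected.get(); // primitive snapshot, not the counter
  }
}
```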


[servicecomb-pack] 39/42: SCB-1368 disable JMX over HTTP


commit 828908e3e6932131339bf30de051023407756107
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 28 19:20:36 2019 +0800

    SCB-1368 disable JMX over HTTP
---
 alpha/alpha-fsm/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/alpha/alpha-fsm/pom.xml b/alpha/alpha-fsm/pom.xml
index 8f53916..2772530 100644
--- a/alpha/alpha-fsm/pom.xml
+++ b/alpha/alpha-fsm/pom.xml
@@ -144,11 +144,11 @@
     <!--
       jmx over http
       http://0.0.0.0:8090/actuator/jolokia/read/akka:type=Cluster
-    -->
     <dependency>
       <groupId>org.jolokia</groupId>
       <artifactId>jolokia-core</artifactId>
     </dependency>
+    -->
     <!-- For testing the artifacts scope are test-->
     <dependency>
       <groupId>org.springframework.boot</groupId>


[servicecomb-pack] 36/42: SCB-1368 Delete useless code


commit af32706201a25debbbd403ff9bbebbb288c5608c
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Sat Sep 28 17:44:38 2019 +0800

    SCB-1368 Delete useless code
---
 .../servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java    | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
index 18ff36a..32a723e 100644
--- a/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
+++ b/alpha/alpha-core/src/main/java/org/apache/servicecomb/pack/alpha/core/fsm/event/base/BaseEvent.java
@@ -118,14 +118,7 @@ public abstract class BaseEvent implements Serializable {
     try {
       return mapper.writeValueAsString(this);
     } catch (JsonProcessingException e) {
-    return this.getClass().getSimpleName()+"{" +
-        "serviceName='" + serviceName + '\'' +
-        ", instanceId='" + instanceId + '\'' +
-        ", globalTxId='" + globalTxId + '\'' +
-        ", parentTxId='" + parentTxId + '\'' +
-        ", localTxId='" + localTxId + '\'' +
-        ", createTime=" + createTime +
-        '}';
+      throw new RuntimeException(e);
     }
   }
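The change above drops the hand-built fallback string and rethrows the checked `JsonProcessingException` as an unchecked exception, so serialization failures surface instead of being masked by a partial, manually maintained `toString`. The general pattern, sketched here with a generic serializer in place of Jackson (the `Event` class and `serialize` method are illustrative only):

```java
// Sketch of the rethrow pattern used in BaseEvent.toString(): wrap a
// serialization failure in an unchecked exception rather than return a
// hand-built fallback string that can drift out of sync with the fields.
class Event {
  private final String payload;

  Event(String payload) {
    this.payload = payload;
  }

  @Override
  public String toString() {
    try {
      return serialize();
    } catch (Exception e) {
      throw new RuntimeException(e); // fail loudly, as the commit does
    }
  }

  // Stand-in for mapper.writeValueAsString(this)
  private String serialize() throws Exception {
    if (payload == null) {
      throw new IllegalStateException("no payload");
    }
    return "{\"payload\":\"" + payload + "\"}";
  }
}
```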
 


[servicecomb-pack] 27/42: SCB-1368 Added parameter description


commit b5bb417ef05fcab4680bc09fa2d80f2af063825c
Author: Lei Zhang <co...@gmail.com>
AuthorDate: Fri Sep 27 10:54:48 2019 +0800

    SCB-1368 Added parameter description
---
 .../org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java    | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
index 97430ba..2afc1a9 100644
--- a/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
+++ b/alpha/alpha-fsm/src/main/java/org/apache/servicecomb/pack/alpha/fsm/FsmAutoConfiguration.java
@@ -55,6 +55,9 @@ import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
 @ConditionalOnProperty(value = {"alpha.feature.akka.enabled"})
 public class FsmAutoConfiguration {
 
+  // TODO
+  //  Size of bulk request, When this value is greater than 0, the batch data will be lost when the jvm crashes.
+  //  In the future, we can use Kafka to solve this problem instead of storing it directly in the ES.
   @Value("${alpha.feature.akka.transaction.repository.elasticsearch.batchSize:100}")
   int repositoryElasticsearchBatchSize;
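The TODO above documents the trade-off behind `batchSize`: buffering documents before a bulk write to Elasticsearch improves throughput, but anything still in the buffer is lost if the JVM crashes before a flush. A hypothetical sketch of that buffering logic (not the project's actual repository code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical batching buffer: flush to the sink once batchSize items
// accumulate. With batchSize > 0, unflushed entries are lost on a JVM
// crash, which is exactly the risk the TODO comment describes.
class BatchBuffer<T> {
  private final int batchSize;
  private final Consumer<List<T>> flusher; // e.g. an ES bulk-write call
  private final List<T> buffer = new ArrayList<>();

  BatchBuffer(int batchSize, Consumer<List<T>> flusher) {
    this.batchSize = batchSize;
    this.flusher = flusher;
  }

  void add(T item) {
    buffer.add(item);
    if (buffer.size() >= batchSize) {
      flush();
    }
  }

  void flush() {
    if (!buffer.isEmpty()) {
      flusher.accept(new ArrayList<>(buffer));
      buffer.clear();
    }
  }
}
```

Feeding events through a durable log such as Kafka before the bulk write, as the TODO suggests, would remove the crash-loss window at the cost of extra infrastructure.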