Posted to dev@rocketmq.apache.org by GitBox <gi...@apache.org> on 2021/06/05 09:13:28 UTC

[GitHub] [rocketmq] dragon-zhang opened a new pull request #2983: RIP 22 RocketMQ Stage Message Consumer Part

dragon-zhang opened a new pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983


   RIP 22 here: https://github.com/apache/rocketmq/issues/2937


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: RIP 22 RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3282132) into [develop](https://codecov.io/gh/apache/rocketmq/commit/5e99cdbeb7d56a46059ca83923a12b9a0d80cece?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (5e99cdb) will **increase** coverage by `0.25%`.
   > The diff coverage is `29.92%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.98%   48.24%   +0.25%     
   + Complexity      4567     3697     -870     
   =============================================
     Files            552      319     -233     
     Lines          36628    30182    -6446     
     Branches        4844     4323     -521     
   =============================================
   - Hits           17577    14560    -3017     
   + Misses         16831    13607    -3224     
   + Partials        2220     2015     -205     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...er/listener/MessageListenerStagedConcurrently.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvbGlzdGVuZXIvTWVzc2FnZUxpc3RlbmVyU3RhZ2VkQ29uY3VycmVudGx5LmphdmE=) | `0.00% <0.00%> (ø)` | |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `54.97% <0.00%> (-5.03%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `14.81% <14.81%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `20.32% <20.32%> (ø)` | |
   | ... and [264 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [5e99cdb...3282132](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   





[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40436982/badge)](https://coveralls.io/builds/40436982)
   
   Coverage decreased (-0.8%) to 53.216% when pulling **cb5d4de41d6ca821ded474da6f544664e78d5c5f on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **52348b862c0dda897764c3b51fe1436c1a5ae0fe on apache:develop**.
   





[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40481714/badge)](https://coveralls.io/builds/40481714)
   
   Coverage decreased (-0.9%) to 53.292% when pulling **a17ddef804beab8fecba1faceb0e60423a26a2d3 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **93974b0dd542c9f478aa8da65909b86ae7de0ca6 on apache:develop**.
   





[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40594327/badge)](https://coveralls.io/builds/40594327)
   
   Coverage decreased (-1.0%) to 53.191% when pulling **6c1259d83a1a0f6e4bbfb385bdc8ecf8557e7b05 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **df1d93fc8859377b92ba87c6947911281656f355 on apache:develop**.
   





[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40475067/badge)](https://coveralls.io/builds/40475067)
   
   Coverage decreased (-1.04%) to 53.154% when pulling **af5ec6f661de2c16bccc667e1995c8c52275eb14 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **57c166bc71cfbe4de4a74b80ea0a380d48f6a229 on apache:develop**.
   





[GitHub] [rocketmq] ifplusor commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
ifplusor commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730374325



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);

Review comment:
       `stageOffset` is read from `stageOffsetStore` for a specific `messageQueue`, but it is cached under the `topic` alone. For the same `topic` with different `messageQueue`s, the correct `stageOffset` cannot be read back.
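
   For illustration, a minimal sketch (not the PR's code) of the caching pattern this comment points at: key the cache by the full `MessageQueue` instead of only its topic, so two queues of the same topic no longer share one entry. The class name, the `loadedFromStore` parameter, and the shape of the data loaded from the store are assumptions for the sketch, not RocketMQ APIs.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;
   import java.util.concurrent.atomic.AtomicInteger;

   import org.apache.rocketmq.common.message.MessageQueue;

   public class PerQueueStageOffsetCache {

       // MessageQueue implements equals/hashCode, so it can serve as a map key;
       // offsets for queue-0 and queue-1 of the same topic stay separate.
       private final ConcurrentMap<MessageQueue,
           ConcurrentMap<String /* strategyId */, ConcurrentMap<String /* groupId */, AtomicInteger>>> offsets =
               new ConcurrentHashMap<>();

       // loadedFromStore stands in for whatever the stage offset store returns for this queue.
       public AtomicInteger currentStageOffset(MessageQueue mq, String strategyId, String groupId,
           Map<String, Map<String, Integer>> loadedFromStore) {
           return offsets
               .computeIfAbsent(mq, key -> convert(loadedFromStore))
               .computeIfAbsent(strategyId, key -> new ConcurrentHashMap<>())
               .computeIfAbsent(groupId, key -> new AtomicInteger(0));
       }

       private static ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
           Map<String, Map<String, Integer>> source) {
           ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> result = new ConcurrentHashMap<>();
           if (source == null) {
               return result;
           }
           source.forEach((strategyId, groups) -> {
               ConcurrentMap<String, AtomicInteger> converted = new ConcurrentHashMap<>();
               groups.forEach((groupId, offset) -> converted.put(groupId, new AtomicInteger(offset)));
               result.put(strategyId, converted);
           });
           return result;
       }
   }
   ```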







[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40437215/badge)](https://coveralls.io/builds/40437215)
   
   Coverage decreased (-0.7%) to 53.311% when pulling **cb5d4de41d6ca821ded474da6f544664e78d5c5f on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **52348b862c0dda897764c3b51fe1436c1a5ae0fe on apache:develop**.
   





[GitHub] [rocketmq] ifplusor commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
ifplusor commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730374895



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -179,15 +179,20 @@ public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String top
         if (null == groupByStrategy) {
             ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
                 new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
-            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            currentStageOffsetMap.put(topic, stageOffset);
             groupByStrategy = currentStageOffsetMap.get(topic);
         }
-        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.get(strategyId);
         if (null == groups) {
+            groupByStrategy.put(strategyId, new ConcurrentHashMap<>());

Review comment:
       `putIfAbsent` is correct under concurrent access; the plain `get`/`put` pair introduced here lets one thread overwrite the map another thread has just installed.
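
   A minimal sketch of the race being described, next to the idiomatic alternatives (`putIfAbsent` or `computeIfAbsent`); the class and method names are illustrative only, not the PR's code.

   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;
   import java.util.concurrent.atomic.AtomicInteger;

   public class PutIfAbsentSketch {

       private final ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy =
           new ConcurrentHashMap<>();

       // Racy: two threads can both observe null, and the later put() silently
       // replaces the map (and any offsets already recorded) installed by the other.
       ConcurrentMap<String, AtomicInteger> racy(String strategyId) {
           ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.get(strategyId);
           if (groups == null) {
               groups = new ConcurrentHashMap<>();
               groupByStrategy.put(strategyId, groups);
           }
           return groups;
       }

       // Safe: putIfAbsent installs the new map only when no mapping exists,
       // so every thread ends up using the single winning instance.
       ConcurrentMap<String, AtomicInteger> withPutIfAbsent(String strategyId) {
           ConcurrentMap<String, AtomicInteger> fresh = new ConcurrentHashMap<>();
           ConcurrentMap<String, AtomicInteger> existing = groupByStrategy.putIfAbsent(strategyId, fresh);
           return existing != null ? existing : fresh;
       }

       // Equivalent and more concise on Java 8+: computeIfAbsent atomically
       // returns the existing value or the newly created one.
       ConcurrentMap<String, AtomicInteger> withComputeIfAbsent(String strategyId) {
           return groupByStrategy.computeIfAbsent(strategyId, key -> new ConcurrentHashMap<>());
       }
   }
   ```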







[GitHub] [rocketmq] coveralls commented on pull request #2983: RIP 22 RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls commented on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40330339/badge)](https://coveralls.io/builds/40330339)
   
   Coverage decreased (-0.6%) to 53.438% when pulling **b152b11f9c1403f56695ed87200acc487b5b724f on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **a1babab507934e81f0e05b2867566c8b459be341 on apache:develop**.
   





[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: RIP 22 RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b152b11) into [develop](https://codecov.io/gh/apache/rocketmq/commit/5e99cdbeb7d56a46059ca83923a12b9a0d80cece?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (5e99cdb) will **increase** coverage by `0.31%`.
   > The diff coverage is `30.14%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.98%   48.30%   +0.31%     
   + Complexity      4567     3701     -866     
   =============================================
     Files            552      319     -233     
     Lines          36628    30183    -6445     
     Branches        4844     4323     -521     
   =============================================
   - Hits           17577    14580    -2997     
   + Misses         16831    13584    -3247     
   + Partials        2220     2019     -201     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.41% <0.00%> (-4.59%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `14.81% <14.81%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `20.58% <20.58%> (ø)` | |
   | [...a/org/apache/rocketmq/broker/BrokerController.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvQnJva2VyQ29udHJvbGxlci5qYXZh) | `44.83% <41.66%> (-0.07%)` | :arrow_down: |
   | ... and [264 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [5e99cdb...b152b11](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   





[GitHub] [rocketmq] codecov-commenter commented on pull request #2983: RIP 22 RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter commented on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b152b11) into [develop](https://codecov.io/gh/apache/rocketmq/commit/5e99cdbeb7d56a46059ca83923a12b9a0d80cece?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (5e99cdb) will **increase** coverage by `0.31%`.
   > The diff coverage is `30.14%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.98%   48.30%   +0.31%     
   + Complexity      4567     3701     -866     
   =============================================
     Files            552      319     -233     
     Lines          36628    30183    -6445     
     Branches        4844     4323     -521     
   =============================================
   - Hits           17577    14580    -2997     
   + Misses         16831    13584    -3247     
   + Partials        2220     2019     -201     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.41% <0.00%> (-4.59%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `14.81% <14.81%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `20.58% <20.58%> (ø)` | |
   | [...a/org/apache/rocketmq/broker/BrokerController.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvQnJva2VyQ29udHJvbGxlci5qYXZh) | `44.83% <41.66%> (-0.07%)` | :arrow_down: |
   | ... and [264 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [5e99cdb...b152b11](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   





[GitHub] [rocketmq] ifplusor commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
ifplusor commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r729671912



##########
File path: common/src/main/java/org/apache/rocketmq/common/UtilAll.java
##########
@@ -29,16 +29,20 @@
 import java.text.NumberFormat;
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Calendar;
 import java.util.Date;
 import java.util.Enumeration;
 import java.util.Iterator;
+import java.util.LinkedHashMap;

Review comment:
       unused import

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);

Review comment:
       Why is the stage offset, which belongs to a **message queue**, keyed by **topic**?

##########
File path: common/src/main/java/org/apache/rocketmq/common/UtilAll.java
##########
@@ -29,16 +29,20 @@
 import java.text.NumberFormat;
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Calendar;
 import java.util.Date;
 import java.util.Enumeration;
 import java.util.Iterator;
+import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.zip.CRC32;
 import java.util.zip.DeflaterOutputStream;
 import java.util.zip.InflaterInputStream;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;

Review comment:
       Ditto: unused imports.

##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/ConcurrentEngine.java
##########
@@ -0,0 +1,463 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.constant.LoggerName;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.logging.InternalLoggerFactory;
+
+public class ConcurrentEngine {
+
+    protected static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.COMMON_LOGGER_NAME);
+
+    protected final ExecutorService enginePool;
+
+    public ConcurrentEngine() {
+        this.enginePool = ForkJoinPool.commonPool();
+    }
+
+    public ConcurrentEngine(ExecutorService enginePool) {
+        this.enginePool = enginePool;
+    }
+
+    public final void runAsync(Runnable... tasks) {
+        runAsync(UtilAll.newArrayList(tasks));
+    }
+
+    protected static <E> List<E> pollAllTask(Queue<E> tasks) {
+        //avoid list expansion
+        List<E> list = new LinkedList<>();
+        while (tasks != null && !tasks.isEmpty()) {
+            E task = tasks.poll();
+            list.add(task);
+        }
+        return list;
+    }
+
+    protected static <T> void doCallback(CallableSupplier<T> supplier, T response) {
+        Collection<Callback<T>> callbacks = supplier.getCallbacks();
+        if (CollectionUtils.isNotEmpty(callbacks)) {
+            for (Callback<T> callback : callbacks) {
+                callback.call(response);
+            }
+        }
+    }
+
+    public final void runAsync(Queue<? extends Runnable> tasks) {
+        runAsync(pollAllTask(tasks));
+    }
+
+    public final void runAsync(Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return;
+        }
+        List<CompletableFuture<Void>> list = new ArrayList<>(tasks.size());
+        for (Runnable task : tasks) {
+            list.add(CompletableFuture.runAsync(task, enginePool));
+        }
+        executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyAsync(Supplier<T>... tasks) {
+        return supplyAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Queue<? extends Supplier<T>> tasks) {
+        return supplyAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Collection<? extends Supplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        List<CompletableFuture<T>> list = new ArrayList<>(tasks.size());
+        for (Supplier<T> task : tasks) {
+            list.add(CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyCallableAsync(CallableSupplier<T>... tasks) {
+        return supplyCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Queue<? extends CallableSupplier<T>> tasks) {
+        return supplyCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Collection<? extends CallableSupplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        Map<CallableSupplier<T>, CompletableFuture<T>> map = new HashMap<>(tasks.size());
+        for (CallableSupplier<T> task : tasks) {
+            map.put(task, CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<CallableSupplier<T>, T> result = executeKeyedAsync(map);
+        for (Map.Entry<CallableSupplier<T>, T> entry : result.entrySet()) {
+            doCallback(entry.getKey(), entry.getValue());
+        }
+        return UtilAll.newArrayList(result.values());
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(KeyedCallableSupplier<K, V>... tasks) {
+        return supplyKeyedCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Queue<? extends KeyedCallableSupplier<K, V>> tasks) {
+        return supplyKeyedCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Collection<? extends KeyedCallableSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>();
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<K, V> result = executeKeyedAsync(map);
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            K key = task.key();
+            V response = result.get(key);
+            doCallback(task, response);
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <T> List<T> executeAsync(CompletableFuture<T>... tasks) {
+        return executeAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Queue<CompletableFuture<T>> tasks) {
+        return executeAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Collection<CompletableFuture<T>> tasks) {

Review comment:
       I think the name `execute` is inappropriate: the futures passed in are already running by the time they reach this method, so it only waits for them and collects their results rather than executing anything.
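       A sketch of the kind of naming I had in mind, inside `ConcurrentEngine`, assuming the method simply joins the futures and gathers their results (the name `joinAll` is only a suggestion):

```java
// Sketch only: same assumed behaviour as executeAsync(Collection<CompletableFuture<T>>),
// renamed so the name says "wait and collect" rather than "execute".
public final <T> List<T> joinAll(Collection<CompletableFuture<T>> futures) {
    List<T> results = new ArrayList<>(futures.size());
    for (CompletableFuture<T> future : futures) {
        results.add(future.join()); // blocks until this future completes
    }
    return results;
}
```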

##########
File path: common/src/main/java/org/apache/rocketmq/common/message/MessageClientExt.java
##########
@@ -36,7 +36,7 @@ public String getMsgId() {
         }
     }
 
-    public void setMsgId(String msgId) {
+    @Override public void setMsgId(String msgId) {

Review comment:
       Do not put the `@Override` annotation and the method declaration on a single line.
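       i.e. keep the annotation on its own line:

```java
@Override
public void setMsgId(String msgId) {
    // method body unchanged
}
```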

##########
File path: common/src/main/java/org/apache/rocketmq/common/protocol/header/UpdateConsumerStageOffsetRequestHeader.java
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * $Id: UpdateConsumerOffsetRequestHeader.java 1835 2013-05-16 02:00:50Z vintagewang@apache.org $
+ */
+package org.apache.rocketmq.common.protocol.header;
+
+import org.apache.rocketmq.remoting.CommandCustomHeader;
+import org.apache.rocketmq.remoting.annotation.CFNotNull;
+import org.apache.rocketmq.remoting.annotation.CFNullable;

Review comment:
       unused import

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;

Review comment:
       `pullBatchSize * consumeBatchSize` is unreasonable as a take size: the two settings are tuned independently, so their product has no clear meaning and can grow far larger than a single consume batch should be.
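       A rough illustration of the concern, using the client defaults as far as I can tell (the fragment below is for illustration only, not a suggested fix):

```java
// pullBatchSize defaults to 32 and consumeMessageBatchMaxSize to 1, so the default
// product is 32; but both knobs are user-tunable and unrelated, e.g.
// pullBatchSize = 64 and consumeMessageBatchMaxSize = 32 would make a single
// take pull 2048 messages from the process queue.
int takeSize = 64 * 32; // = 2048
```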

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;
+                        List<MessageExt> msgs = this.processQueue.takeMessages(takeSize);
+                        if (!msgs.isEmpty()) {
+                            //ensure that the stage definitions are up to date
+                            ConsumeMessageStagedConcurrentlyService.this.refreshStageDefinition();
+                            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = removeAndRePutAllMessagesInTheNextStage(topic, msgs);
+                            for (Map.Entry<String, Map<String, List<MessageExt>>> entry : messageGroupByStrategyThenGroup.entrySet()) {
+                                String strategyId = entry.getKey();
+                                Map<String, List<MessageExt>> messageGroups = entry.getValue();
+                                for (Map.Entry<String, List<MessageExt>> innerEntry : messageGroups.entrySet()) {
+                                    String groupId = innerEntry.getKey();
+                                    List<MessageExt> messagesCanConsume = innerEntry.getValue();
+                                    List<List<MessageExt>> lists = UtilAll.partition(messagesCanConsume, consumeBatchSize);
+                                    for (final List<MessageExt> list : lists) {
+                                        defaultMQPushConsumerImpl.resetRetryAndNamespace(list, defaultMQPushConsumer.getConsumerGroup());
+                                        int currentLeftoverStageIndex =
+                                            ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStageIndexAndUpdate(this.messageQueue, topic, strategyId, groupId, list.size());
+                                        ConsumeRequest consumeRequest = new ConsumeRequest(list, this.processQueue, this.messageQueue, continueConsume, currentLeftoverStageIndex, strategyId, groupId);
+                                        if (currentLeftoverStageIndex >= 0) {
+                                            engine.runPriorityAsync(currentLeftoverStageIndex, consumeRequest);
+                                        } else {
+                                            //If the strategy id is null, this branch is taken
+                                            engine.runPriorityAsync(consumeRequest);
+                                        }
+                                    }
+                                }
+                            }
+                        } else {
+                            continueConsume.set(false);
+                        }
+                    }
+                } else {
+                    if (this.processQueue.isDropped()) {
+                        log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                        return;
+                    }
+
+                    ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 100);
+                }
+            }
+        }
+
+        private Map<String, Map<String, List<MessageExt>>> removeAndRePutAllMessagesInTheNextStage(String topic,
+            List<MessageExt> msgs) {
+            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = new LinkedHashMap<>();
+            for (MessageExt message : msgs) {
+                String strategyId = NULL;
+                try {
+                    strategyId = String.valueOf(messageListener.computeStrategy(message));
+                } catch (Exception e) {
+                    log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+                }
+                String groupId = NULL;
+                try {
+                    groupId = String.valueOf(messageListener.computeGroup(message));
+                } catch (Exception e) {
+                    log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+                }
+                //null strategy means direct concurrency
+                Map<String, List<MessageExt>> messageGroupByStrategy = messageGroupByStrategyThenGroup.putIfAbsent(strategyId, new LinkedHashMap<>());
+                if (null == messageGroupByStrategy) {
+                    messageGroupByStrategy = messageGroupByStrategyThenGroup.get(strategyId);
+                }
+                List<MessageExt> messages = messageGroupByStrategy.putIfAbsent(groupId, new CopyOnWriteArrayList<>());
+                if (null == messages) {
+                    messages = messageGroupByStrategy.get(groupId);
+                }
+                messages.add(message);
+            }
+            for (Map.Entry<String, Map<String, List<MessageExt>>> entry : messageGroupByStrategyThenGroup.entrySet()) {
+                String strategyId = entry.getKey();
+                Map<String, List<MessageExt>> messageGroupByStrategy = entry.getValue();
+                for (Map.Entry<String, List<MessageExt>> innerEntry : messageGroupByStrategy.entrySet()) {
+                    String groupId = innerEntry.getKey();
+                    List<MessageExt> messages = innerEntry.getValue();
+                    int leftoverStage = ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStage(this.messageQueue, topic, strategyId, groupId);
+                    int size = messages.size();
+                    if (leftoverStage < 0 || size <= leftoverStage) {
+                        continue;
+                    }
+                    List<MessageExt> list = messages.subList(leftoverStage, size);
+                    //the messages must be put back here
+                    this.processQueue.putMessage(list);
+                    messages.removeAll(list);
+                }
+            }
+            return messageGroupByStrategyThenGroup;
+        }
+    }
+
+    class ConsumeRequest implements Runnable {
+        private final List<MessageExt> msgs;
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+        private final AtomicBoolean continueConsume;
+        private final int currentLeftoverStageIndex;
+        private final String strategyId;
+        private final String groupId;
+
+        public ConsumeRequest(List<MessageExt> msgs,
+            ProcessQueue processQueue,
+            MessageQueue messageQueue,
+            AtomicBoolean continueConsume,
+            int currentLeftoverStage,
+            String strategyId,
+            String groupId) {
+            this.msgs = msgs;
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+            this.continueConsume = continueConsume;
+            this.currentLeftoverStageIndex = currentLeftoverStage;
+            this.strategyId = strategyId;
+            this.groupId = groupId;
+        }
+
+        public ProcessQueue getProcessQueue() {
+            return processQueue;
+        }
+
+        public MessageQueue getMessageQueue() {
+            return messageQueue;
+        }
+
+        @Override
+        public void run() {
+            ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(this.messageQueue);
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            context.setStageIndex(currentLeftoverStageIndex);
+            ConsumeOrderlyStatus status = null;
+
+            ConsumeMessageContext consumeMessageContext = null;
+            if (ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
+                consumeMessageContext = new ConsumeMessageContext();
+                consumeMessageContext
+                    .setConsumerGroup(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumerGroup());
+                consumeMessageContext.setNamespace(defaultMQPushConsumer.getNamespace());
+                consumeMessageContext.setMq(messageQueue);
+                consumeMessageContext.setMsgList(msgs);
+                consumeMessageContext.setSuccess(false);
+                // init the consume context type
+                consumeMessageContext.setProps(new HashMap<String, String>());
+                ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookBefore(consumeMessageContext);
+            }
+
+            long beginTimestamp = System.currentTimeMillis();
+            ConsumeReturnType returnType = ConsumeReturnType.SUCCESS;
+            boolean hasException = false;
+            try {
+                this.processQueue.getConsumeLock().lock();
+                if (this.processQueue.isDropped()) {
+                    log.warn("consumeMessage, the message queue not be able to consume, because it's dropped. {}",
+                        this.messageQueue);
+                    continueConsume.set(false);
+                    return;
+                }
+                for (MessageExt msg : msgs) {
+                    MessageAccessor.setConsumeStartTimeStamp(msg, String.valueOf(System.currentTimeMillis()));
+                }
+                status = messageListener.consumeMessage(Collections.unmodifiableList(msgs), context);
+            } catch (Throwable e) {
+                log.warn("consumeMessage exception: {} Group: {} Msgs: {} MQ: {}",
+                    RemotingHelper.exceptionSimpleDesc(e),
+                    ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                    msgs,
+                    messageQueue);
+                hasException = true;
+            } finally {
+                this.processQueue.getConsumeLock().unlock();
+            }
+
+            if (null == status
+                || ConsumeOrderlyStatus.ROLLBACK == status
+                || ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
+                log.warn("consumeMessage Orderly return not OK, Group: {} Msgs: {} MQ: {}",
+                    ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                    msgs,
+                    messageQueue);
+            }
+
+            long consumeRT = System.currentTimeMillis() - beginTimestamp;
+            if (null == status) {
+                if (hasException) {
+                    returnType = ConsumeReturnType.EXCEPTION;
+                } else {
+                    returnType = ConsumeReturnType.RETURNNULL;
+                }
+            } else if (consumeRT >= defaultMQPushConsumer.getConsumeTimeout() * 60 * 1000) {
+                returnType = ConsumeReturnType.TIME_OUT;
+            } else if (ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
+                returnType = ConsumeReturnType.FAILED;
+            } else if (ConsumeOrderlyStatus.SUCCESS == status) {
+                returnType = ConsumeReturnType.SUCCESS;
+            }
+
+            if (ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
+                consumeMessageContext.getProps().put(MixAll.CONSUME_CONTEXT_TYPE, returnType.name());
+            }
+
+            if (null == status) {
+                status = ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
+            }
+
+            if (ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
+                consumeMessageContext.setStatus(status.toString());
+                consumeMessageContext
+                    .setSuccess(ConsumeOrderlyStatus.SUCCESS == status || ConsumeOrderlyStatus.COMMIT == status);
+                ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookAfter(consumeMessageContext);
+            }
+
+            ConsumeMessageStagedConcurrentlyService.this.getConsumerStatsManager()
+                .incConsumeRT(ConsumeMessageStagedConcurrentlyService.this.consumerGroup, messageQueue.getTopic(), consumeRT);
+            continueConsume.set(ConsumeMessageStagedConcurrentlyService.this.processConsumeResult(strategyId, groupId, msgs, status, context, this)

Review comment:
       I think the code below would be better.
   
   ```java
   if (!ConsumeMessageStagedConcurrentlyService.this.processConsumeResult(strategyId, groupId, msgs, status, context, this)) {
           continueConsume.set(false);
   }
   ```
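   Presumably the point is that `continueConsume` is shared by all `ConsumeRequest`s submitted from the same `DispatchRequest`, so an unconditional `set(...)` could flip the flag back to `true` after another request has already set it to `false`; the suggested conditional form only ever lowers it.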

##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/ConcurrentEngine.java
##########
@@ -0,0 +1,463 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.constant.LoggerName;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.logging.InternalLoggerFactory;
+
+public class ConcurrentEngine {
+
+    protected static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.COMMON_LOGGER_NAME);
+
+    protected final ExecutorService enginePool;
+
+    public ConcurrentEngine() {
+        this.enginePool = ForkJoinPool.commonPool();
+    }
+
+    public ConcurrentEngine(ExecutorService enginePool) {
+        this.enginePool = enginePool;
+    }
+
+    public final void runAsync(Runnable... tasks) {
+        runAsync(UtilAll.newArrayList(tasks));
+    }
+
+    protected static <E> List<E> pollAllTask(Queue<E> tasks) {
+        //avoid list expansion
+        List<E> list = new LinkedList<>();
+        while (tasks != null && !tasks.isEmpty()) {
+            E task = tasks.poll();
+            list.add(task);
+        }
+        return list;
+    }
+
+    protected static <T> void doCallback(CallableSupplier<T> supplier, T response) {
+        Collection<Callback<T>> callbacks = supplier.getCallbacks();
+        if (CollectionUtils.isNotEmpty(callbacks)) {
+            for (Callback<T> callback : callbacks) {
+                callback.call(response);
+            }
+        }
+    }
+
+    public final void runAsync(Queue<? extends Runnable> tasks) {
+        runAsync(pollAllTask(tasks));
+    }
+
+    public final void runAsync(Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return;
+        }
+        List<CompletableFuture<Void>> list = new ArrayList<>(tasks.size());
+        for (Runnable task : tasks) {
+            list.add(CompletableFuture.runAsync(task, enginePool));
+        }
+        executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyAsync(Supplier<T>... tasks) {
+        return supplyAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Queue<? extends Supplier<T>> tasks) {
+        return supplyAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Collection<? extends Supplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        List<CompletableFuture<T>> list = new ArrayList<>(tasks.size());
+        for (Supplier<T> task : tasks) {
+            list.add(CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyCallableAsync(CallableSupplier<T>... tasks) {
+        return supplyCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Queue<? extends CallableSupplier<T>> tasks) {
+        return supplyCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Collection<? extends CallableSupplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        Map<CallableSupplier<T>, CompletableFuture<T>> map = new HashMap<>(tasks.size());
+        for (CallableSupplier<T> task : tasks) {
+            map.put(task, CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<CallableSupplier<T>, T> result = executeKeyedAsync(map);
+        for (Map.Entry<CallableSupplier<T>, T> entry : result.entrySet()) {
+            doCallback(entry.getKey(), entry.getValue());
+        }
+        return UtilAll.newArrayList(result.values());
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(KeyedCallableSupplier<K, V>... tasks) {
+        return supplyKeyedCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Queue<? extends KeyedCallableSupplier<K, V>> tasks) {
+        return supplyKeyedCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Collection<? extends KeyedCallableSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>();
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<K, V> result = executeKeyedAsync(map);
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            K key = task.key();
+            V response = result.get(key);
+            doCallback(task, response);
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <T> List<T> executeAsync(CompletableFuture<T>... tasks) {
+        return executeAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Queue<CompletableFuture<T>> tasks) {
+        return executeAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Collection<CompletableFuture<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks)) {
+            return new ArrayList<>();
+        }
+        try {
+            CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
+        } catch (Exception e) {
+            log.error("tasks executeAsync failed with exception:{}", e.getMessage(), e);
+            e.printStackTrace();
+        }
+        return getResultIgnoreException(tasks);
+    }
+
+    public final <T> List<T> getResultIgnoreException(Collection<CompletableFuture<T>> tasks) {
+        List<T> result = new ArrayList<>(tasks.size());
+        for (CompletableFuture<T> completableFuture : tasks) {
+            if (null == completableFuture) {
+                continue;
+            }
+            try {
+                T response = completableFuture.get();
+                if (null != response) {
+                    result.add(response);
+                }
+            } catch (Exception e) {
+                log.error("task:{} execute failed with exception:{}", completableFuture, e.getMessage(), e);
+            }
+        }
+        return result;

Review comment:
       `result.size()` and `tasks.size()` may not be equal; is this acceptable?
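
   For illustration only (an editor's sketch, not something the PR or the reviewer proposes): one way to keep the two sizes equal is to record a `null` placeholder for every future that failed or returned `null`, so callers can still correlate results with the submitted tasks by position. The method name `getResultWithPlaceholders` is made up for this sketch, which otherwise assumes the `ConcurrentEngine` context shown above (its `log` field and imports).

   ```java
   public final <T> List<T> getResultWithPlaceholders(Collection<CompletableFuture<T>> tasks) {
       // keeps result.size() == tasks.size(); callers must be prepared for null entries
       List<T> result = new ArrayList<>(tasks.size());
       for (CompletableFuture<T> future : tasks) {
           if (null == future) {
               result.add(null);
               continue;
           }
           try {
               result.add(future.get());
           } catch (Exception e) {
               log.error("task:{} execute failed with exception:{}", future, e.getMessage(), e);
               result.add(null); // placeholder so positions still line up with the input
           }
       }
       return result;
   }
   ```

   Whether dropped entries or `null` placeholders are the better contract is exactly the question the reviewer raises; the sketch only shows the alternative.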

##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/ConcurrentEngine.java
##########
@@ -0,0 +1,463 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.constant.LoggerName;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.logging.InternalLoggerFactory;
+
+public class ConcurrentEngine {
+
+    protected static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.COMMON_LOGGER_NAME);
+
+    protected final ExecutorService enginePool;
+
+    public ConcurrentEngine() {
+        this.enginePool = ForkJoinPool.commonPool();
+    }
+
+    public ConcurrentEngine(ExecutorService enginePool) {
+        this.enginePool = enginePool;
+    }
+
+    public final void runAsync(Runnable... tasks) {
+        runAsync(UtilAll.newArrayList(tasks));
+    }
+
+    protected static <E> List<E> pollAllTask(Queue<E> tasks) {
+        //avoid list expansion
+        List<E> list = new LinkedList<>();
+        while (tasks != null && !tasks.isEmpty()) {
+            E task = tasks.poll();
+            list.add(task);
+        }
+        return list;
+    }
+
+    protected static <T> void doCallback(CallableSupplier<T> supplier, T response) {
+        Collection<Callback<T>> callbacks = supplier.getCallbacks();
+        if (CollectionUtils.isNotEmpty(callbacks)) {
+            for (Callback<T> callback : callbacks) {
+                callback.call(response);
+            }
+        }
+    }
+
+    public final void runAsync(Queue<? extends Runnable> tasks) {
+        runAsync(pollAllTask(tasks));
+    }
+
+    public final void runAsync(Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return;
+        }
+        List<CompletableFuture<Void>> list = new ArrayList<>(tasks.size());
+        for (Runnable task : tasks) {
+            list.add(CompletableFuture.runAsync(task, enginePool));
+        }
+        executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyAsync(Supplier<T>... tasks) {
+        return supplyAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Queue<? extends Supplier<T>> tasks) {
+        return supplyAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Collection<? extends Supplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        List<CompletableFuture<T>> list = new ArrayList<>(tasks.size());
+        for (Supplier<T> task : tasks) {
+            list.add(CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyCallableAsync(CallableSupplier<T>... tasks) {
+        return supplyCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Queue<? extends CallableSupplier<T>> tasks) {
+        return supplyCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Collection<? extends CallableSupplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        Map<CallableSupplier<T>, CompletableFuture<T>> map = new HashMap<>(tasks.size());
+        for (CallableSupplier<T> task : tasks) {
+            map.put(task, CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<CallableSupplier<T>, T> result = executeKeyedAsync(map);
+        for (Map.Entry<CallableSupplier<T>, T> entry : result.entrySet()) {
+            doCallback(entry.getKey(), entry.getValue());
+        }
+        return UtilAll.newArrayList(result.values());
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(KeyedCallableSupplier<K, V>... tasks) {
+        return supplyKeyedCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Queue<? extends KeyedCallableSupplier<K, V>> tasks) {
+        return supplyKeyedCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Collection<? extends KeyedCallableSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>();
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<K, V> result = executeKeyedAsync(map);
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            K key = task.key();
+            V response = result.get(key);
+            doCallback(task, response);
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <T> List<T> executeAsync(CompletableFuture<T>... tasks) {
+        return executeAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Queue<CompletableFuture<T>> tasks) {
+        return executeAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Collection<CompletableFuture<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks)) {
+            return new ArrayList<>();
+        }
+        try {
+            CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
+        } catch (Exception e) {
+            log.error("tasks executeAsync failed with exception:{}", e.getMessage(), e);
+            e.printStackTrace();
+        }
+        return getResultIgnoreException(tasks);
+    }
+
+    public final <T> List<T> getResultIgnoreException(Collection<CompletableFuture<T>> tasks) {
+        List<T> result = new ArrayList<>(tasks.size());
+        for (CompletableFuture<T> completableFuture : tasks) {
+            if (null == completableFuture) {
+                continue;
+            }
+            try {
+                T response = completableFuture.get();
+                if (null != response) {
+                    result.add(response);
+                }
+            } catch (Exception e) {
+                log.error("task:{} execute failed with exception:{}", completableFuture, e.getMessage(), e);
+            }
+        }
+        return result;
+    }
+
+    public final void runAsync(long timeout, TimeUnit unit, Runnable... tasks) {
+        runAsync(timeout, unit, UtilAll.newArrayList(tasks));
+    }
+
+    public final void runAsync(long timeout, TimeUnit unit, Queue<? extends Runnable> tasks) {
+        runAsync(timeout, unit, pollAllTask(tasks));
+    }
+
+    public final void runAsync(long timeout, TimeUnit unit, Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return;
+        }
+        List<CompletableFuture<Void>> list = new ArrayList<>(tasks.size());
+        for (Runnable task : tasks) {
+            list.add(CompletableFuture.runAsync(task, enginePool));
+        }
+        executeAsync(timeout, unit, list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyAsync(long timeout, TimeUnit unit, Supplier<T>... tasks) {
+        return supplyAsync(timeout, unit, UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(long timeout, TimeUnit unit, Queue<? extends Supplier<T>> tasks) {
+        return supplyAsync(timeout, unit, pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(long timeout, TimeUnit unit, Collection<? extends Supplier<T>> tasks) {
+        if (null == tasks || tasks.size() == 0 || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        List<CompletableFuture<T>> list = new ArrayList<>(tasks.size());
+        for (Supplier<T> task : tasks) {
+            list.add(CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeAsync(timeout, unit, list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyCallableAsync(long timeout, TimeUnit unit, CallableSupplier<T>... tasks) {
+        return supplyCallableAsync(timeout, unit, UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(long timeout, TimeUnit unit,
+        Queue<? extends CallableSupplier<T>> tasks) {
+        return supplyCallableAsync(timeout, unit, pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(long timeout, TimeUnit unit,
+        Collection<? extends CallableSupplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        Map<CallableSupplier<T>, CompletableFuture<T>> map = new HashMap<>(tasks.size());
+        for (CallableSupplier<T> task : tasks) {
+            map.put(task, CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<CallableSupplier<T>, T> result = executeKeyedAsync(map, timeout, unit);
+        for (Map.Entry<CallableSupplier<T>, T> entry : result.entrySet()) {
+            doCallback(entry.getKey(), entry.getValue());
+        }
+        return UtilAll.newArrayList(result.values());
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(long timeout, TimeUnit unit,
+        KeyedCallableSupplier<K, V>... tasks) {
+        return supplyKeyedCallableAsync(timeout, unit, UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(long timeout, TimeUnit unit,
+        Queue<? extends KeyedCallableSupplier<K, V>> tasks) {
+        return supplyKeyedCallableAsync(timeout, unit, pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(long timeout, TimeUnit unit,
+        Collection<? extends KeyedCallableSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>();
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<K, V> result = executeKeyedAsync(map, timeout, unit);
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            K key = task.key();
+            V response = result.get(key);
+            doCallback(task, response);
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <T> List<T> executeAsync(long timeout, TimeUnit unit, CompletableFuture<T>... tasks) {
+        return executeAsync(timeout, unit, UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> executeAsync(long timeout, TimeUnit unit, Queue<CompletableFuture<T>> tasks) {
+        return executeAsync(timeout, unit, pollAllTask(tasks));
+    }
+
+    public final <T> List<T> executeAsync(long timeout, TimeUnit unit, Collection<CompletableFuture<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks)) {
+            return new ArrayList<>();
+        }
+        try {
+            CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
+        } catch (Exception e) {
+            log.error("tasks executeAsync failed with exception:{}", e.getMessage(), e);
+            e.printStackTrace();
+        }
+        return getResultIgnoreException(tasks, timeout, unit);
+    }
+
+    public static <T> List<T> getResultIgnoreException(Collection<CompletableFuture<T>> tasks, long timeout,
+        TimeUnit unit) {
+        List<T> result = new ArrayList<>(tasks.size());
+        for (CompletableFuture<T> completableFuture : tasks) {
+            if (null == completableFuture) {
+                continue;
+            }
+            try {
+                T response = completableFuture.get(timeout, unit);
+                if (null != response) {
+                    result.add(response);
+                }
+            } catch (Exception e) {
+                log.error("task:{} execute failed with exception:{}", completableFuture, e.getMessage(), e);
+            }
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedAsync(KeyedSupplier<K, V>... tasks) {
+        return supplyKeyedAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedAsync(Queue<? extends KeyedSupplier<K, V>> tasks) {
+        return supplyKeyedAsync(pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedAsync(Collection<? extends KeyedSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>(0);
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeKeyedAsync(map);
+    }
+
+    public static <K, V> Map<K, V> executeKeyedAsync(Map<K, CompletableFuture<V>> tasks) {
+        if (MapUtils.isEmpty(tasks)) {
+            return new HashMap<>(0);
+        }
+        try {
+            CompletableFuture.allOf(tasks.values().toArray(new CompletableFuture[0])).join();
+        } catch (Exception e) {
+            log.error("tasks executeAsync failed with exception:{}", e.getMessage(), e);
+            e.printStackTrace();
+        }
+        return getKeyedResultIgnoreException(tasks);
+    }
+
+    public static <K, V> Map<K, V> getKeyedResultIgnoreException(Map<K, CompletableFuture<V>> tasks) {
+        Map<K, V> result = new HashMap<>(tasks.size());
+        for (Map.Entry<K, CompletableFuture<V>> entry : tasks.entrySet()) {
+            K key = entry.getKey();
+            CompletableFuture<V> value = entry.getValue();
+            if (null == value) {
+                continue;
+            }
+            try {
+                V response = value.get();
+                if (null != response) {
+                    result.put(key, response);
+                }
+            } catch (Exception e) {
+                log.error("task with key:{} execute failed with exception:{}", key, e.getMessage(), e);
+            }
+        }
+        return result;

Review comment:
       ditto
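   (Presumably the same observation as above: `getKeyedResultIgnoreException` drops entries whose futures failed or returned `null`, so `result.size()` can be smaller than `tasks.size()`.)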

##########
File path: client/src/main/java/org/apache/rocketmq/client/consumer/store/StageOffsetSerializeWrapper.java
##########
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.consumer.store;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;

Review comment:
       unused import

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));

Review comment:
       ditto

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {

Review comment:
       The `ConsumeMessageStagedConcurrentlyService.this` qualifier is unnecessary here: `start()` already runs on the service instance, so plain `this.defaultMQPushConsumerImpl.messageModel()` is enough.
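
       A minimal sketch of the simplified call site, for illustration only and reusing only names that already appear in the quoted diff:

       ```java
       // inside start(): the method already runs on the service instance,
       // so the outer-class qualifier can simply be dropped
       if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
           this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
               @Override
               public void run() {
                   lockMQPeriodically();
               }
           }, 1000, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
       }
       ```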

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;
+                        List<MessageExt> msgs = this.processQueue.takeMessages(takeSize);
+                        if (!msgs.isEmpty()) {
+                            //ensure that the stage definitions are up to date
+                            ConsumeMessageStagedConcurrentlyService.this.refreshStageDefinition();
+                            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = removeAndRePutAllMessagesInTheNextStage(topic, msgs);
+                            for (Map.Entry<String, Map<String, List<MessageExt>>> entry : messageGroupByStrategyThenGroup.entrySet()) {
+                                String strategyId = entry.getKey();
+                                Map<String, List<MessageExt>> messageGroups = entry.getValue();
+                                for (Map.Entry<String, List<MessageExt>> innerEntry : messageGroups.entrySet()) {
+                                    String groupId = innerEntry.getKey();
+                                    List<MessageExt> messagesCanConsume = innerEntry.getValue();
+                                    List<List<MessageExt>> lists = UtilAll.partition(messagesCanConsume, consumeBatchSize);
+                                    for (final List<MessageExt> list : lists) {
+                                        defaultMQPushConsumerImpl.resetRetryAndNamespace(list, defaultMQPushConsumer.getConsumerGroup());
+                                        int currentLeftoverStageIndex =
+                                            ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStageIndexAndUpdate(this.messageQueue, topic, strategyId, groupId, list.size());
+                                        ConsumeRequest consumeRequest = new ConsumeRequest(list, this.processQueue, this.messageQueue, continueConsume, currentLeftoverStageIndex, strategyId, groupId);
+                                        if (currentLeftoverStageIndex >= 0) {
+                                            engine.runPriorityAsync(currentLeftoverStageIndex, consumeRequest);
+                                        } else {
+                                            //if the strategyId is null, this branch is taken
+                                            engine.runPriorityAsync(consumeRequest);
+                                        }
+                                    }
+                                }
+                            }
+                        } else {
+                            continueConsume.set(false);
+                        }
+                    }
+                } else {
+                    if (this.processQueue.isDropped()) {
+                        log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                        return;
+                    }
+
+                    ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 100);
+                }
+            }
+        }
+
+        private Map<String, Map<String, List<MessageExt>>> removeAndRePutAllMessagesInTheNextStage(String topic,
+            List<MessageExt> msgs) {
+            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = new LinkedHashMap<>();
+            for (MessageExt message : msgs) {
+                String strategyId = NULL;
+                try {
+                    strategyId = String.valueOf(messageListener.computeStrategy(message));
+                } catch (Exception e) {
+                    log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+                }
+                String groupId = NULL;
+                try {
+                    groupId = String.valueOf(messageListener.computeGroup(message));
+                } catch (Exception e) {
+                    log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+                }
+                //null strategy means direct concurrency
+                Map<String, List<MessageExt>> messageGroupByStrategy = messageGroupByStrategyThenGroup.putIfAbsent(strategyId, new LinkedHashMap<>());

Review comment:
       Optimistically assuming the key is absent and calling `putIfAbsent` is inefficient: a new `LinkedHashMap` is allocated for every message even when the key already exists, and the returned previous value still has to be null-checked before use.
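
       A rough sketch of the `computeIfAbsent` pattern being suggested here, as a standalone illustration with String placeholders rather than the PR's exact types:

       ```java
       import java.util.ArrayList;
       import java.util.LinkedHashMap;
       import java.util.List;
       import java.util.Map;

       public class GroupingSketch {
           // Groups message bodies by strategyId, then by groupId; the nested
           // containers are created only on first use, instead of allocating a
           // fresh LinkedHashMap per message the way putIfAbsent does above.
           static Map<String, Map<String, List<String>>> group(List<String[]> msgs) {
               Map<String, Map<String, List<String>>> byStrategyThenGroup = new LinkedHashMap<>();
               for (String[] m : msgs) {
                   String strategyId = m[0];
                   String groupId = m[1];
                   String body = m[2];
                   byStrategyThenGroup
                       .computeIfAbsent(strategyId, k -> new LinkedHashMap<>())
                       .computeIfAbsent(groupId, k -> new ArrayList<>())
                       .add(body);
               }
               return byStrategyThenGroup;
           }
       }
       ```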

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;
+                        List<MessageExt> msgs = this.processQueue.takeMessages(takeSize);
+                        if (!msgs.isEmpty()) {
+                            //ensure that the stage definitions is up to date
+                            ConsumeMessageStagedConcurrentlyService.this.refreshStageDefinition();
+                            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = removeAndRePutAllMessagesInTheNextStage(topic, msgs);
+                            for (Map.Entry<String, Map<String, List<MessageExt>>> entry : messageGroupByStrategyThenGroup.entrySet()) {
+                                String strategyId = entry.getKey();
+                                Map<String, List<MessageExt>> messageGroups = entry.getValue();
+                                for (Map.Entry<String, List<MessageExt>> innerEntry : messageGroups.entrySet()) {
+                                    String groupId = innerEntry.getKey();
+                                    List<MessageExt> messagesCanConsume = innerEntry.getValue();
+                                    List<List<MessageExt>> lists = UtilAll.partition(messagesCanConsume, consumeBatchSize);
+                                    for (final List<MessageExt> list : lists) {
+                                        defaultMQPushConsumerImpl.resetRetryAndNamespace(list, defaultMQPushConsumer.getConsumerGroup());
+                                        int currentLeftoverStageIndex =
+                                            ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStageIndexAndUpdate(this.messageQueue, topic, strategyId, groupId, list.size());
+                                        ConsumeRequest consumeRequest = new ConsumeRequest(list, this.processQueue, this.messageQueue, continueConsume, currentLeftoverStageIndex, strategyId, groupId);
+                                        if (currentLeftoverStageIndex >= 0) {
+                                            engine.runPriorityAsync(currentLeftoverStageIndex, consumeRequest);
+                                        } else {
+                                            //If the strategy Id is null, it will go in this case
+                                            engine.runPriorityAsync(consumeRequest);
+                                        }
+                                    }
+                                }
+                            }
+                        } else {
+                            continueConsume.set(false);
+                        }
+                    }
+                } else {
+                    if (this.processQueue.isDropped()) {
+                        log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                        return;
+                    }
+
+                    ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 100);
+                }
+            }
+        }
+
+        private Map<String, Map<String, List<MessageExt>>> removeAndRePutAllMessagesInTheNextStage(String topic,
+            List<MessageExt> msgs) {
+            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = new LinkedHashMap<>();
+            for (MessageExt message : msgs) {
+                String strategyId = NULL;
+                try {
+                    strategyId = String.valueOf(messageListener.computeStrategy(message));
+                } catch (Exception e) {
+                    log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+                }
+                String groupId = NULL;
+                try {
+                    groupId = String.valueOf(messageListener.computeGroup(message));
+                } catch (Exception e) {
+                    log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+                }
+                //null strategy means direct concurrency
+                Map<String, List<MessageExt>> messageGroupByStrategy = messageGroupByStrategyThenGroup.putIfAbsent(strategyId, new LinkedHashMap<>());
+                if (null == messageGroupByStrategy) {
+                    messageGroupByStrategy = messageGroupByStrategyThenGroup.get(strategyId);
+                }
+                List<MessageExt> messages = messageGroupByStrategy.putIfAbsent(groupId, new CopyOnWriteArrayList<>());

Review comment:
       ditto

##########
File path: broker/src/main/java/org/apache/rocketmq/broker/BrokerController.java
##########
@@ -179,6 +181,7 @@ public BrokerController(
         this.nettyClientConfig = nettyClientConfig;
         this.messageStoreConfig = messageStoreConfig;
         this.consumerOffsetManager = new ConsumerOffsetManager(this);
+        this.consumerStageOffsetManager=new ConsumerStageOffsetManager(this);

Review comment:
       Need spaces around `=`, i.e. `this.consumerStageOffsetManager = new ConsumerStageOffsetManager(this);`.

##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/PriorityConcurrentEngine.java
##########
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ConcurrentNavigableMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutorService;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.rocketmq.common.UtilAll;
+
+public class PriorityConcurrentEngine extends ConcurrentEngine {
+
+    /**
+     * highest priority
+     */
+    public static final Integer MAX_PRIORITY = Integer.MIN_VALUE;
+
+    /**
+     * lowest priority
+     */
+    public static final Integer MIN_PRIORITY = Integer.MAX_VALUE;
+
+    private final StagedConcurrentConsumeService consumeService = new StagedConcurrentConsumeService(this);
+
+    private final ConcurrentNavigableMap<Integer, Queue<Object>> priorityTasks = new ConcurrentSkipListMap<>();
+
+    public PriorityConcurrentEngine() {
+        super();
+    }
+
+    public PriorityConcurrentEngine(ExecutorService enginePool) {
+        super(enginePool);
+    }
+
+    public final void runPriorityAsync(Runnable... tasks) {
+        runPriorityAsync(MIN_PRIORITY, tasks);
+    }
+
+    public final void runPriorityAsync(Queue<Runnable> tasks) {
+        runPriorityAsync(MIN_PRIORITY, tasks);
+    }
+
+    public final void runPriorityAsync(Collection<Runnable> tasks) {
+        runPriorityAsync(MIN_PRIORITY, tasks);
+    }
+
+    public final void runPriorityAsync(Integer priority, Runnable... tasks) {
+        runPriorityAsync(priority, UtilAll.newArrayList(tasks));
+    }
+
+    public final void runPriorityAsync(Integer priority, Queue<? extends Runnable> tasks) {
+        runPriorityAsync(priority, pollAllTask(tasks));
+    }
+
+    public final void runPriorityAsync(Integer priority, Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks)) {
+            return;
+        }
+        Queue<Object> queue = priorityTasks.putIfAbsent(priority, new ConcurrentLinkedQueue<>());

Review comment:
       Call `get` first, and only call `putIfAbsent` when it returns `null`, so a new queue is not allocated on every invocation.
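
A minimal sketch of the suggested lookup (variable names taken from the quoted hunk; an illustration, not part of the patch): the `get` hit path allocates nothing, and `putIfAbsent` only runs on a miss.

```java
// Sketch only: get-first lookup so the common case builds no throwaway queue.
Queue<Object> queue = priorityTasks.get(priority);
if (null == queue) {
    Queue<Object> candidate = new ConcurrentLinkedQueue<>();
    Queue<Object> existing = priorityTasks.putIfAbsent(priority, candidate);
    // another thread may have inserted first; use whichever queue ended up in the map
    queue = existing != null ? existing : candidate;
}
// then enqueue the tasks, as the method presumably does after the lookup
queue.addAll(tasks);
```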

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());

Review comment:
       Do not use `putIfAbsent` here; same as above, call `get` first and only fall back when it returns `null`, so a new `ConcurrentHashMap` is not created on every call.
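
One idiomatic way to follow this (a sketch, assuming the intent matches the get-first comment elsewhere in this review; not necessarily the only fix the reviewer has in mind): `computeIfAbsent` builds the nested map only on a miss and never returns `null`, so the null re-read disappears.

```java
// Sketch only: computeIfAbsent creates the nested map once, on demand.
ConcurrentMap<String, AtomicInteger> groups =
    groupByStrategy.computeIfAbsent(strategyId, k -> new ConcurrentHashMap<>());
return groups.computeIfAbsent(groupId, k -> new AtomicInteger(0));
```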

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {

Review comment:
       Because `start` is a member of `ConsumeMessageStagedConcurrentlyService`, the qualified `ConsumeMessageStagedConcurrentlyService.this` prefix is unnecessary here; plain `this` is enough.
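
For illustration, the same condition without the qualifier (a sketch; the surrounding scheduling code is unchanged):

```java
// Sketch only: inside start() itself, a plain `this` is sufficient; the
// ConsumeMessageStagedConcurrentlyService.this form is only needed inside
// the anonymous Runnable that follows.
if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
    // scheduleAtFixedRate(... lockMQPeriodically ...) as in the patch
}
```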

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;

Review comment:
       If `pullBatchSize` is 32 and `consumeBatchSize` is also 32, `takeSize` will be 1024.
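
       For scale, a minimal sketch of the arithmetic behind this remark (the 32/32 values are the example above, not defaults read from the client):

       ```java
       // Illustrative only: upper bound on how many messages one dispatch
       // iteration may take when pullBatchSize and consumeMessageBatchMaxSize
       // are both 32 (example values, not the client defaults).
       public class TakeSizeMath {
           public static void main(String[] args) {
               int pullBatchSize = 32;
               int consumeMessageBatchMaxSize = 32;
               int takeSize = pullBatchSize * consumeMessageBatchMaxSize;
               System.out.println(takeSize); // prints 1024
           }
       }
       ```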

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);

Review comment:
       How is the difference in stage offset between different message queues reflected here, given that the map is keyed only by topic?
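
       To make the question concrete, a purely hypothetical sketch (not code from this PR): keying the outer map by `MessageQueue` instead of by topic would keep the stage offsets of different queues of the same topic apart, since `MessageQueue` defines `equals()`/`hashCode()` over topic, broker name and queue id.

       ```java
       // Hypothetical alternative keying, shown only to illustrate the concern above.
       import java.util.concurrent.ConcurrentHashMap;
       import java.util.concurrent.ConcurrentMap;
       import java.util.concurrent.atomic.AtomicInteger;
       import org.apache.rocketmq.common.message.MessageQueue;

       class PerQueueStageOffsets {
           // The outer key is the MessageQueue itself, so offsets of different
           // queues of the same topic no longer share a single entry.
           private final ConcurrentMap<MessageQueue,
               ConcurrentMap<String /* strategyId */,
                   ConcurrentMap<String /* groupId */, AtomicInteger>>> offsets = new ConcurrentHashMap<>();

           AtomicInteger offsetOf(MessageQueue mq, String strategyId, String groupId) {
               return offsets
                   .computeIfAbsent(mq, k -> new ConcurrentHashMap<>())
                   .computeIfAbsent(strategyId, k -> new ConcurrentHashMap<>())
                   .computeIfAbsent(groupId, k -> new AtomicInteger(0));
           }
       }
       ```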




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730397740



##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/ConcurrentEngine.java
##########
@@ -0,0 +1,463 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.constant.LoggerName;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.logging.InternalLoggerFactory;
+
+public class ConcurrentEngine {
+
+    protected static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.COMMON_LOGGER_NAME);
+
+    protected final ExecutorService enginePool;
+
+    public ConcurrentEngine() {
+        this.enginePool = ForkJoinPool.commonPool();
+    }
+
+    public ConcurrentEngine(ExecutorService enginePool) {
+        this.enginePool = enginePool;
+    }
+
+    public final void runAsync(Runnable... tasks) {
+        runAsync(UtilAll.newArrayList(tasks));
+    }
+
+    protected static <E> List<E> pollAllTask(Queue<E> tasks) {
+        //avoid list expansion
+        List<E> list = new LinkedList<>();
+        while (tasks != null && !tasks.isEmpty()) {
+            E task = tasks.poll();
+            list.add(task);
+        }
+        return list;
+    }
+
+    protected static <T> void doCallback(CallableSupplier<T> supplier, T response) {
+        Collection<Callback<T>> callbacks = supplier.getCallbacks();
+        if (CollectionUtils.isNotEmpty(callbacks)) {
+            for (Callback<T> callback : callbacks) {
+                callback.call(response);
+            }
+        }
+    }
+
+    public final void runAsync(Queue<? extends Runnable> tasks) {
+        runAsync(pollAllTask(tasks));
+    }
+
+    public final void runAsync(Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return;
+        }
+        List<CompletableFuture<Void>> list = new ArrayList<>(tasks.size());
+        for (Runnable task : tasks) {
+            list.add(CompletableFuture.runAsync(task, enginePool));
+        }
+        executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyAsync(Supplier<T>... tasks) {
+        return supplyAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Queue<? extends Supplier<T>> tasks) {
+        return supplyAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Collection<? extends Supplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        List<CompletableFuture<T>> list = new ArrayList<>(tasks.size());
+        for (Supplier<T> task : tasks) {
+            list.add(CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyCallableAsync(CallableSupplier<T>... tasks) {
+        return supplyCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Queue<? extends CallableSupplier<T>> tasks) {
+        return supplyCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Collection<? extends CallableSupplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        Map<CallableSupplier<T>, CompletableFuture<T>> map = new HashMap<>(tasks.size());
+        for (CallableSupplier<T> task : tasks) {
+            map.put(task, CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<CallableSupplier<T>, T> result = executeKeyedAsync(map);
+        for (Map.Entry<CallableSupplier<T>, T> entry : result.entrySet()) {
+            doCallback(entry.getKey(), entry.getValue());
+        }
+        return UtilAll.newArrayList(result.values());
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(KeyedCallableSupplier<K, V>... tasks) {
+        return supplyKeyedCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Queue<? extends KeyedCallableSupplier<K, V>> tasks) {
+        return supplyKeyedCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Collection<? extends KeyedCallableSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>();
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<K, V> result = executeKeyedAsync(map);
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            K key = task.key();
+            V response = result.get(key);
+            doCallback(task, response);
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <T> List<T> executeAsync(CompletableFuture<T>... tasks) {
+        return executeAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Queue<CompletableFuture<T>> tasks) {
+        return executeAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Collection<CompletableFuture<T>> tasks) {

Review comment:
       What do you think would be a more reasonable name?
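
       One illustrative direction, only to make the naming question concrete (a minimal sketch on plain JDK `CompletableFuture`, not a change taken from this PR): since the futures passed in are already running and the method merely waits for them and collects their results, a name such as `joinAll` might describe it more precisely.

       ```java
       // Minimal sketch; the class and the name joinAll are hypothetical.
       import java.util.ArrayList;
       import java.util.Collection;
       import java.util.List;
       import java.util.concurrent.CompletableFuture;

       final class Futures {
           static <T> List<T> joinAll(Collection<CompletableFuture<T>> futures) {
               // Block until every future completes, then gather results in iteration order.
               CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
               List<T> results = new ArrayList<>(futures.size());
               for (CompletableFuture<T> future : futures) {
                   results.add(future.join());
               }
               return results;
           }
       }
       ```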




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730397517



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {

Review comment:
       Yes, but keeping compatibility with `MessageListenerOrderly` would be riskier.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40483886/badge)](https://coveralls.io/builds/40483886)
   
   Coverage decreased (-0.9%) to 53.257% when pulling **5f875eb0f0671a01e6d54a8fa171ec6bdd1d9568 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **df1d93fc8859377b92ba87c6947911281656f355 on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40437274/badge)](https://coveralls.io/builds/40437274)
   
   Coverage decreased (-0.7%) to 53.294% when pulling **cb5d4de41d6ca821ded474da6f544664e78d5c5f on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **52348b862c0dda897764c3b51fe1436c1a5ae0fe on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (af5ec6f) into [develop](https://codecov.io/gh/apache/rocketmq/commit/57c166bc71cfbe4de4a74b80ea0a380d48f6a229?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (57c166b) will **increase** coverage by `0.34%`.
   > The diff coverage is `28.26%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.83%   48.18%   +0.34%     
   + Complexity      4552     3699     -853     
   =============================================
     Files            552      320     -232     
     Lines          36628    30262    -6366     
     Branches        4844     4337     -507     
   =============================================
   - Hits           17521    14581    -2940     
   + Misses         16879    13666    -3213     
   + Partials        2228     2015     -213     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...er/listener/MessageListenerStagedConcurrently.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvbGlzdGVuZXIvTWVzc2FnZUxpc3RlbmVyU3RhZ2VkQ29uY3VycmVudGx5LmphdmE=) | `0.00% <0.00%> (ø)` | |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.84% <0.00%> (-4.16%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `13.11% <13.11%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `16.39% <16.39%> (ø)` | |
   | ... and [265 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [57c166b...af5ec6f](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: RIP 22 RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40330556/badge)](https://coveralls.io/builds/40330556)
   
   Coverage decreased (-0.6%) to 53.4% when pulling **32821320b3434c38e1878b4058f0040b0f415216 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **a1babab507934e81f0e05b2867566c8b459be341 on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (d3f7136) into [develop](https://codecov.io/gh/apache/rocketmq/commit/a1babab507934e81f0e05b2867566c8b459be341?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a1babab) will **increase** coverage by `0.51%`.
   > The diff coverage is `29.92%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.81%   48.33%   +0.51%     
   + Complexity      4550     3703     -847     
   =============================================
     Files            552      319     -233     
     Lines          36628    30182    -6446     
     Branches        4844     4323     -521     
   =============================================
   - Hits           17513    14587    -2926     
   + Misses         16885    13584    -3301     
   + Partials        2230     2011     -219     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...er/listener/MessageListenerStagedConcurrently.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvbGlzdGVuZXIvTWVzc2FnZUxpc3RlbmVyU3RhZ2VkQ29uY3VycmVudGx5LmphdmE=) | `0.00% <0.00%> (ø)` | |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `57.57% <0.00%> (-1.96%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `14.81% <14.81%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `20.32% <20.32%> (ø)` | |
   | ... and [268 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [a1babab...d3f7136](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-1005519874


   see https://github.com/apache/rocketmq/pull/3691


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40902338/badge)](https://coveralls.io/builds/40902338)
   
   Coverage decreased (-1.01%) to 53.214% when pulling **760962b4ae59d404d111f7fe7621b4ef9e1c22ba on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **a2f8810c9adedcd82fd4cb9a69b17128a1a96b5e on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cb5d4de) into [develop](https://codecov.io/gh/apache/rocketmq/commit/52348b862c0dda897764c3b51fe1436c1a5ae0fe?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (52348b8) will **increase** coverage by `0.33%`.
   > The diff coverage is `28.26%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.91%   48.24%   +0.33%     
   + Complexity      4560     3702     -858     
   =============================================
     Files            552      320     -232     
     Lines          36628    30262    -6366     
     Branches        4844     4337     -507     
   =============================================
   - Hits           17549    14599    -2950     
   + Misses         16857    13653    -3204     
   + Partials        2222     2010     -212     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...er/listener/MessageListenerStagedConcurrently.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvbGlzdGVuZXIvTWVzc2FnZUxpc3RlbmVyU3RhZ2VkQ29uY3VycmVudGx5LmphdmE=) | `0.00% <0.00%> (ø)` | |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.41% <0.00%> (-4.59%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `13.11% <13.11%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `16.39% <16.39%> (ø)` | |
   | ... and [263 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [52348b8...cb5d4de](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (817addd) into [develop](https://codecov.io/gh/apache/rocketmq/commit/df1d93fc8859377b92ba87c6947911281656f355?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (df1d93f) will **increase** coverage by `0.56%`.
   > The diff coverage is `29.21%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.78%   48.34%   +0.56%     
   + Complexity      4549     3711     -838     
   =============================================
     Files            552      320     -232     
     Lines          36628    30269    -6359     
     Branches        4844     4337     -507     
   =============================================
   - Hits           17501    14635    -2866     
   + Misses         16901    13622    -3279     
   + Partials        2226     2012     -214     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `58.87% <0.00%> (-1.13%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `13.11% <13.11%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `17.28% <17.28%> (ø)` | |
   | [...a/org/apache/rocketmq/broker/BrokerController.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvQnJva2VyQ29udHJvbGxlci5qYXZh) | `44.83% <41.66%> (-0.07%)` | :arrow_down: |
   | ... and [264 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [df1d93f...817addd](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730342295



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
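+/**
+ * ConsumeMessageService implementation backing {@link MessageListenerStagedConcurrently}:
+ * pulled messages are dispatched per MessageQueue by the dispatchExecutor, consumed through
+ * a {@link PriorityConcurrentEngine} on top of the consumeExecutor, and tracked with
+ * per-strategy/group stage offsets that can be persisted via an optional {@link StageOffsetStore}.
+ */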
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // Good performance is obtained when the number of consume threads equals the
+        // number of consume queues of the topic multiplied by this.pullBatchSize.
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
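+    /**
+     * Converts each strategy's per-stage sizes into cumulative thresholds, e.g. a
+     * definition of [3, 5, 2] becomes [3, 8, 10], so that the current stage can later
+     * be located by comparing the stage offset against these sums.
+     */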
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
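+    /**
+     * Returns the mutable stage offset counter for the given topic/strategy/group,
+     * lazily initialized from the {@link StageOffsetStore} when one is configured,
+     * otherwise starting at 0. A throwaway counter holding -1 is returned when no
+     * strategy applies.
+     */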
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
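+    /**
+     * Returns how many messages are still needed to reach the current stage threshold
+     * (e.g. with thresholds [3, 8, 10] and a stage offset of 5 this returns 3), or -1
+     * when no stage definition applies.
+     */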
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
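+    /**
+     * Same as {@link #getCurrentLeftoverStage} but returns the index of the current
+     * stage instead of the leftover count, or -1 when no stage definition applies.
+     */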
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
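+    /**
+     * Atomically reads the current stage index and then advances the stage offset by
+     * {@code delta}, so concurrent callers observe consistent stage boundaries.
+     */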
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception: " + e.getMessage(), e);
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception: " + e.getMessage(), e);
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
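+                    // intentional fall-through: COMMIT and ROLLBACK are acknowledged like SUCCESS in auto-commit mode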
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;

Review comment:
       Take out enough messages in one slice so that they can be grouped better; the size of `takeSize` is not a problem.
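       To make the grouping point concrete, here is a small, self-contained sketch (the message values and the one-character group key are made up, standing in for `messageListener.computeGroup(msg)`); taking a `takeSize` slice in one go lets messages that share a group land in the same consume batch:
       ```java
       import java.util.ArrayList;
       import java.util.Arrays;
       import java.util.LinkedHashMap;
       import java.util.List;
       import java.util.Map;

       public class TakeSizeGroupingExample {
           public static void main(String[] args) {
               int pullBatchSize = 4;                           // hypothetical consumer settings
               int consumeBatchSize = 2;
               int takeSize = pullBatchSize * consumeBatchSize; // 8 messages handled in one slice

               List<String> pulled = Arrays.asList("A1", "B1", "A2", "B2", "A3", "B3", "A4", "B4");
               Map<String, List<String>> byGroup = new LinkedHashMap<>();
               for (String msg : pulled.subList(0, takeSize)) {
                   String groupId = msg.substring(0, 1);        // stand-in for computeGroup(msg)
                   byGroup.computeIfAbsent(groupId, k -> new ArrayList<>()).add(msg);
               }
               System.out.println(byGroup);                     // {A=[A1, A2, A3, A4], B=[B1, B2, B3, B4]}
           }
       }
       ```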

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);

Review comment:
       For example, suppose there are 5 order messages whose only difference is their status (1, 2, 3, 4, 5). `stageOffset` means the `status index`, such as `0 1 2 3 4`. Or do you mean different MQ instances?
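       To show how that bookkeeping reads in code, here is a minimal, self-contained sketch (the two-stage definition `[3, 2]` is made up; the summing and the leftover test mirror `refreshStageDefinition()` and `getCurrentLeftoverStageIndex()` quoted in this file):
       ```java
       import java.util.ArrayList;
       import java.util.Arrays;
       import java.util.List;

       public class StageOffsetExample {
           public static void main(String[] args) {
               List<Integer> definitions = Arrays.asList(3, 2); // stage 0 holds 3 messages, stage 1 holds 2
               List<Integer> summed = new ArrayList<>();
               int sum = 0;
               for (Integer d : definitions) {
                   summed.add(sum = sum + d);                   // [3, 5], as in refreshStageDefinition()
               }

               int stageOffset = 4;                             // 4 messages of this group already consumed
               int stageIndex = -1;
               for (int i = 0; i < summed.size(); i++) {
                   if (summed.get(i) - stageOffset > 0) {       // same test as getCurrentLeftoverStageIndex()
                       stageIndex = i;
                       break;
                   }
               }
               System.out.println(stageIndex);                  // 1: this group is now in its second stage
           }
       }
       ```
       Under that reading, `stageOffset` counts how many messages of one group have been consumed so far, rather than identifying different MQ instances.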




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730397064



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -179,15 +179,20 @@ public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String top
         if (null == groupByStrategy) {
             ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
                 new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
-            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            currentStageOffsetMap.put(topic, stageOffset);
             groupByStrategy = currentStageOffsetMap.get(topic);
         }
-        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.get(strategyId);
         if (null == groups) {
+            groupByStrategy.put(strategyId, new ConcurrentHashMap<>());

Review comment:
       ```java
       public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
           String groupId) {
           if (null == strategyId || NULL.equals(strategyId)) {
               return new AtomicInteger(-1);
           }
           groupId = String.valueOf(groupId);
           ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.computeIfAbsent(topic,
               key -> (stageOffsetStore == null ? new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE)))
           );
           ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.computeIfAbsent(strategyId, key -> new ConcurrentHashMap<>());
           return groups.computeIfAbsent(groupId, key -> new AtomicInteger(0));
       }
       ```
       How about this one?
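       For context, a small standalone comparison of the two JDK calls involved (nothing RocketMQ-specific): `putIfAbsent` returns the previous mapping, which is null on the first insert, so the caller has to re-read the map, while `computeIfAbsent` builds the value lazily and returns whatever is now mapped.
       ```java
       import java.util.concurrent.ConcurrentHashMap;
       import java.util.concurrent.ConcurrentMap;
       import java.util.concurrent.atomic.AtomicInteger;

       public class ComputeIfAbsentExample {
           public static void main(String[] args) {
               ConcurrentMap<String, AtomicInteger> groups = new ConcurrentHashMap<>();

               // putIfAbsent: the first call returns null, so an extra get() is needed afterwards.
               AtomicInteger previous = groups.putIfAbsent("g1", new AtomicInteger(0));
               System.out.println(previous);          // null
               System.out.println(groups.get("g1"));  // 0

               // computeIfAbsent: returns the mapped value directly and only builds it when missing.
               AtomicInteger current = groups.computeIfAbsent("g2", key -> new AtomicInteger(0));
               System.out.println(current);           // 0
           }
       }
       ```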




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (c01637d) into [develop](https://codecov.io/gh/apache/rocketmq/commit/a1babab507934e81f0e05b2867566c8b459be341?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a1babab) will **increase** coverage by `0.52%`.
   > The diff coverage is `29.92%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.81%   48.33%   +0.52%     
   + Complexity      4550     3704     -846     
   =============================================
     Files            552      319     -233     
     Lines          36628    30182    -6446     
     Branches        4844     4323     -521     
   =============================================
   - Hits           17513    14588    -2925     
   + Misses         16885    13584    -3301     
   + Partials        2230     2010     -220     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...er/listener/MessageListenerStagedConcurrently.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvbGlzdGVuZXIvTWVzc2FnZUxpc3RlbmVyU3RhZ2VkQ29uY3VycmVudGx5LmphdmE=) | `0.00% <0.00%> (ø)` | |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.41% <0.00%> (-4.13%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `14.81% <14.81%> (ø)` | |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `20.32% <20.32%> (ø)` | |
   | ... and [267 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [a1babab...c01637d](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730260513



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {

Review comment:
       Can you give specific reasons?

##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/ConcurrentEngine.java
##########
@@ -0,0 +1,463 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.constant.LoggerName;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.logging.InternalLoggerFactory;
+
+public class ConcurrentEngine {
+
+    protected static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.COMMON_LOGGER_NAME);
+
+    protected final ExecutorService enginePool;
+
+    public ConcurrentEngine() {
+        this.enginePool = ForkJoinPool.commonPool();
+    }
+
+    public ConcurrentEngine(ExecutorService enginePool) {
+        this.enginePool = enginePool;
+    }
+
+    public final void runAsync(Runnable... tasks) {
+        runAsync(UtilAll.newArrayList(tasks));
+    }
+
+    protected static <E> List<E> pollAllTask(Queue<E> tasks) {
+        //avoid list expansion
+        List<E> list = new LinkedList<>();
+        while (tasks != null && !tasks.isEmpty()) {
+            E task = tasks.poll();
+            list.add(task);
+        }
+        return list;
+    }
+
+    protected static <T> void doCallback(CallableSupplier<T> supplier, T response) {
+        Collection<Callback<T>> callbacks = supplier.getCallbacks();
+        if (CollectionUtils.isNotEmpty(callbacks)) {
+            for (Callback<T> callback : callbacks) {
+                callback.call(response);
+            }
+        }
+    }
+
+    public final void runAsync(Queue<? extends Runnable> tasks) {
+        runAsync(pollAllTask(tasks));
+    }
+
+    public final void runAsync(Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return;
+        }
+        List<CompletableFuture<Void>> list = new ArrayList<>(tasks.size());
+        for (Runnable task : tasks) {
+            list.add(CompletableFuture.runAsync(task, enginePool));
+        }
+        executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyAsync(Supplier<T>... tasks) {
+        return supplyAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Queue<? extends Supplier<T>> tasks) {
+        return supplyAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyAsync(Collection<? extends Supplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        List<CompletableFuture<T>> list = new ArrayList<>(tasks.size());
+        for (Supplier<T> task : tasks) {
+            list.add(CompletableFuture.supplyAsync(task, enginePool));
+        }
+        return executeAsync(list);
+    }
+
+    @SafeVarargs
+    public final <T> List<T> supplyCallableAsync(CallableSupplier<T>... tasks) {
+        return supplyCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Queue<? extends CallableSupplier<T>> tasks) {
+        return supplyCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> supplyCallableAsync(Collection<? extends CallableSupplier<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new ArrayList<>();
+        }
+        Map<CallableSupplier<T>, CompletableFuture<T>> map = new HashMap<>(tasks.size());
+        for (CallableSupplier<T> task : tasks) {
+            map.put(task, CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<CallableSupplier<T>, T> result = executeKeyedAsync(map);
+        for (Map.Entry<CallableSupplier<T>, T> entry : result.entrySet()) {
+            doCallback(entry.getKey(), entry.getValue());
+        }
+        return UtilAll.newArrayList(result.values());
+    }
+
+    @SafeVarargs
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(KeyedCallableSupplier<K, V>... tasks) {
+        return supplyKeyedCallableAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Queue<? extends KeyedCallableSupplier<K, V>> tasks) {
+        return supplyKeyedCallableAsync(pollAllTask(tasks));
+    }
+
+    public final <K, V> Map<K, V> supplyKeyedCallableAsync(Collection<? extends KeyedCallableSupplier<K, V>> tasks) {
+        if (CollectionUtils.isEmpty(tasks) || enginePool.isShutdown()) {
+            return new HashMap<>();
+        }
+        Map<K, CompletableFuture<V>> map = new HashMap<>(tasks.size());
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            map.put(task.key(), CompletableFuture.supplyAsync(task, enginePool));
+        }
+        Map<K, V> result = executeKeyedAsync(map);
+        for (KeyedCallableSupplier<K, V> task : tasks) {
+            K key = task.key();
+            V response = result.get(key);
+            doCallback(task, response);
+        }
+        return result;
+    }
+
+    @SafeVarargs
+    public final <T> List<T> executeAsync(CompletableFuture<T>... tasks) {
+        return executeAsync(UtilAll.newArrayList(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Queue<CompletableFuture<T>> tasks) {
+        return executeAsync(pollAllTask(tasks));
+    }
+
+    public final <T> List<T> executeAsync(Collection<CompletableFuture<T>> tasks) {
+        if (CollectionUtils.isEmpty(tasks)) {
+            return new ArrayList<>();
+        }
+        try {
+            CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
+        } catch (Exception e) {
+            log.error("tasks executeAsync failed with exception:{}", e.getMessage(), e);
+            e.printStackTrace();
+        }
+        return getResultIgnoreException(tasks);
+    }
+
+    public final <T> List<T> getResultIgnoreException(Collection<CompletableFuture<T>> tasks) {
+        List<T> result = new ArrayList<>(tasks.size());
+        for (CompletableFuture<T> completableFuture : tasks) {
+            if (null == completableFuture) {
+                continue;
+            }
+            try {
+                T response = completableFuture.get();
+                if (null != response) {
+                    result.add(response);
+                }
+            } catch (Exception e) {
+                log.error("task:{} execute failed with exception:{}", completableFuture, e.getMessage(), e);
+            }
+        }
+        return result;

Review comment:
       Acceptable, because if consumption fails, the messages will be taken and consumed again.
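       As a minimal sketch of that "log, drop, and let it be taken again" behaviour (plain `CompletableFuture`, not the engine itself): a failed future throws on `get()`, the exception is reported, and only the successful results are returned.
       ```java
       import java.util.ArrayList;
       import java.util.List;
       import java.util.concurrent.CompletableFuture;

       public class IgnoreExceptionExample {
           public static void main(String[] args) {
               List<CompletableFuture<String>> tasks = new ArrayList<>();
               tasks.add(CompletableFuture.supplyAsync(() -> "ok"));
               tasks.add(CompletableFuture.supplyAsync(() -> {
                   throw new IllegalStateException("consume failed");
               }));

               List<String> results = new ArrayList<>();
               for (CompletableFuture<String> future : tasks) {
                   try {
                       results.add(future.get());
                   } catch (Exception e) {
                       // dropped here; per the reply above, the failed consumption is simply taken again later
                       System.out.println("dropped failed task: " + e.getCause());
                   }
               }
               System.out.println(results);           // [ok]
           }
       }
       ```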

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;

Review comment:
       Think of `pullBatchSize` as the number of batches (packets): batches are consumed in parallel with one another, the messages inside a single batch are consumed serially, and each batch consumes up to `consumeBatchSize` messages at a time.
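
       A minimal, self-contained sketch of the batching arithmetic described above (the class and method names below are illustrative only, not part of the PR): takeSize = pullBatchSize * consumeBatchSize messages are split into pullBatchSize batches; batches may run in parallel with each other, while the messages inside one batch are handled one after another.

           import java.util.ArrayList;
           import java.util.List;
           import java.util.concurrent.ExecutorService;
           import java.util.concurrent.Executors;
           import java.util.concurrent.TimeUnit;

           public class BatchSplitSketch {

               // Split the pulled messages into batches of at most consumeBatchSize each.
               static <T> List<List<T>> split(List<T> msgs, int consumeBatchSize) {
                   List<List<T>> batches = new ArrayList<>();
                   for (int i = 0; i < msgs.size(); i += consumeBatchSize) {
                       batches.add(msgs.subList(i, Math.min(i + consumeBatchSize, msgs.size())));
                   }
                   return batches;
               }

               public static void main(String[] args) throws InterruptedException {
                   int pullBatchSize = 4;
                   int consumeBatchSize = 8;
                   int takeSize = pullBatchSize * consumeBatchSize;

                   List<Integer> msgs = new ArrayList<>();
                   for (int i = 0; i < takeSize; i++) {
                       msgs.add(i);
                   }

                   // Parallel between batches: each batch is submitted as one task.
                   // Serial within a batch: the loop inside the task runs sequentially.
                   ExecutorService pool = Executors.newFixedThreadPool(pullBatchSize);
                   for (List<Integer> batch : split(msgs, consumeBatchSize)) {
                       pool.submit(() -> {
                           for (Integer msg : batch) {
                               // consume msg serially within this batch
                           }
                       });
                   }
                   pool.shutdown();
                   pool.awaitTermination(1, TimeUnit.SECONDS);
               }
           }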

##########
File path: common/src/main/java/org/apache/rocketmq/common/concurrent/PriorityConcurrentEngine.java
##########
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.common.concurrent;
+
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ConcurrentNavigableMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutorService;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.rocketmq.common.UtilAll;
+
+public class PriorityConcurrentEngine extends ConcurrentEngine {
+
+    /**
+     * highest priority
+     */
+    public static final Integer MAX_PRIORITY = Integer.MIN_VALUE;
+
+    /**
+     * lowest priority
+     */
+    public static final Integer MIN_PRIORITY = Integer.MAX_VALUE;
+
+    private final StagedConcurrentConsumeService consumeService = new StagedConcurrentConsumeService(this);
+
+    private final ConcurrentNavigableMap<Integer, Queue<Object>> priorityTasks = new ConcurrentSkipListMap<>();
+
+    public PriorityConcurrentEngine() {
+        super();
+    }
+
+    public PriorityConcurrentEngine(ExecutorService enginePool) {
+        super(enginePool);
+    }
+
+    public final void runPriorityAsync(Runnable... tasks) {
+        runPriorityAsync(MIN_PRIORITY, tasks);
+    }
+
+    public final void runPriorityAsync(Queue<Runnable> tasks) {
+        runPriorityAsync(MIN_PRIORITY, tasks);
+    }
+
+    public final void runPriorityAsync(Collection<Runnable> tasks) {
+        runPriorityAsync(MIN_PRIORITY, tasks);
+    }
+
+    public final void runPriorityAsync(Integer priority, Runnable... tasks) {
+        runPriorityAsync(priority, UtilAll.newArrayList(tasks));
+    }
+
+    public final void runPriorityAsync(Integer priority, Queue<? extends Runnable> tasks) {
+        runPriorityAsync(priority, pollAllTask(tasks));
+    }
+
+    public final void runPriorityAsync(Integer priority, Collection<? extends Runnable> tasks) {
+        if (CollectionUtils.isEmpty(tasks)) {
+            return;
+        }
+        Queue<Object> queue = priorityTasks.putIfAbsent(priority, new ConcurrentLinkedQueue<>());

Review comment:
       got it
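
       For readers following along, a small standalone sketch (not part of the PR; all names below are hypothetical) of the two properties that matter for a priority map like the one above: a ConcurrentSkipListMap iterates its keys in ascending order, so the smallest priority value, MAX_PRIORITY = Integer.MIN_VALUE, is drained first; and ConcurrentMap.putIfAbsent returns null when the key had no previous mapping, so computeIfAbsent is the more convenient way to obtain the queue to append to.

           import java.util.Queue;
           import java.util.concurrent.ConcurrentLinkedQueue;
           import java.util.concurrent.ConcurrentNavigableMap;
           import java.util.concurrent.ConcurrentSkipListMap;

           public class PriorityMapSketch {
               public static void main(String[] args) {
                   ConcurrentNavigableMap<Integer, Queue<String>> priorityTasks = new ConcurrentSkipListMap<>();

                   // computeIfAbsent always returns the queue that is now mapped,
                   // whether it was just created or already existed; putIfAbsent
                   // would return null on the first insertion for a given priority.
                   priorityTasks.computeIfAbsent(Integer.MAX_VALUE, k -> new ConcurrentLinkedQueue<>()).add("lowest");
                   priorityTasks.computeIfAbsent(0, k -> new ConcurrentLinkedQueue<>()).add("normal");
                   priorityTasks.computeIfAbsent(Integer.MIN_VALUE, k -> new ConcurrentLinkedQueue<>()).add("highest");

                   // A skip-list map iterates keys in ascending order, so the
                   // smallest priority value (the highest priority) comes first.
                   priorityTasks.forEach((priority, queue) -> System.out.println(priority + " -> " + queue));
                   // -2147483648 -> [highest]
                   // 0 -> [normal]
                   // 2147483647 -> [lowest]
               }
           }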

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);

Review comment:
       To report consumption progress
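
       As background for the hunk above, a minimal standalone sketch (the class and method names are illustrative only, not the PR's API) of the nested bookkeeping that getCurrentStageOffset maintains: one counter per topic, strategyId and groupId, so that reporting consumption progress amounts to reading back the innermost AtomicInteger.

           import java.util.concurrent.ConcurrentHashMap;
           import java.util.concurrent.ConcurrentMap;
           import java.util.concurrent.atomic.AtomicInteger;

           public class StageOffsetSketch {

               // topic -> strategyId -> groupId -> current stage offset
               private final ConcurrentMap<String, ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>>> offsets =
                   new ConcurrentHashMap<>();

               AtomicInteger offsetOf(String topic, String strategyId, String groupId) {
                   return offsets
                       .computeIfAbsent(topic, t -> new ConcurrentHashMap<>())
                       .computeIfAbsent(strategyId, s -> new ConcurrentHashMap<>())
                       .computeIfAbsent(groupId, g -> new AtomicInteger(0));
               }

               public static void main(String[] args) {
                   StageOffsetSketch sketch = new StageOffsetSketch();
                   sketch.offsetOf("TopicTest", "strategy-1", "group-A").addAndGet(5);
                   // Reporting progress is simply reading the counter back.
                   System.out.println(sketch.offsetOf("TopicTest", "strategy-1", "group-A").get()); // prints 5
               }
           }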

##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);
+            groupByStrategy = currentStageOffsetMap.get(topic);
+        }
+        ConcurrentMap<String, AtomicInteger> groups = groupByStrategy.putIfAbsent(strategyId, new ConcurrentHashMap<>());
+        if (null == groups) {
+            groups = groupByStrategy.get(strategyId);
+        }
+        groups.putIfAbsent(groupId, new AtomicInteger(0));
+        return groups.get(groupId);
+    }
+
+    private ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> convert(
+        Map<String, Map<String, Integer>> original) {
+        if (null == original) {
+            return new ConcurrentHashMap<>();
+        }
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> map = new ConcurrentHashMap<>(original.size());
+        for (Map.Entry<String, Map<String, Integer>> entry : original.entrySet()) {
+            String strategy = entry.getKey();
+            ConcurrentMap<String, AtomicInteger> temp = new ConcurrentHashMap<>();
+            Map<String, Integer> groups = entry.getValue();
+            for (Map.Entry<String, Integer> innerEntry : groups.entrySet()) {
+                String key = innerEntry.getKey();
+                Integer value = innerEntry.getValue();
+                temp.put(key, new AtomicInteger(value));
+            }
+            map.put(strategy, temp);
+        }
+        return map;
+    }
+
+    public int getCurrentLeftoverStage(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (Integer stageDefinition : summedStageDefinition) {
+                int left = stageDefinition - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return left;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndex(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId) {
+            return -1;
+        }
+        List<Integer> summedStageDefinition = summedStageDefinitionMap.get(strategyId);
+        if (CollectionUtils.isNotEmpty(summedStageDefinition)) {
+            for (int i = 0; i < summedStageDefinition.size(); i++) {
+                int left = summedStageDefinition.get(i) - getCurrentStageOffset(messageQueue, topic, strategyId, groupId).get();
+                if (left > 0) {
+                    return i;
+                }
+            }
+        }
+        return -1;
+    }
+
+    public int getCurrentLeftoverStageIndexAndUpdate(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId, int delta) {
+        final AtomicInteger offset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        synchronized (offset) {
+            try {
+                return getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId);
+            } finally {
+                offset.getAndAdd(delta);
+            }
+        }
+    }
+
+    @Override
+    public void updateCorePoolSize(int corePoolSize) {
+        if (corePoolSize > 0
+            && corePoolSize <= Short.MAX_VALUE
+            && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
+            this.consumeExecutor.setCorePoolSize(corePoolSize);
+        }
+    }
+
+    @Override
+    public void incCorePoolSize() {
+    }
+
+    @Override
+    public void decCorePoolSize() {
+    }
+
+    @Override
+    public int getCorePoolSize() {
+        return this.consumeExecutor.getCorePoolSize();
+    }
+
+    @Override
+    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
+        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
+        result.setOrder(true);
+
+        String topic = msg.getTopic();
+        List<MessageExt> msgs = new ArrayList<MessageExt>();
+        msgs.add(msg);
+        MessageQueue mq = new MessageQueue();
+        mq.setBrokerName(brokerName);
+        mq.setTopic(topic);
+        mq.setQueueId(msg.getQueueId());
+
+        ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(mq);
+
+        this.defaultMQPushConsumerImpl.resetRetryAndNamespace(msgs, this.consumerGroup);
+
+        final long beginTime = System.currentTimeMillis();
+
+        log.info("consumeMessageDirectly receive new message: {}", msg);
+
+        Set<MessageQueue> topicSubscribeInfo = this.defaultMQPushConsumerImpl.getRebalanceImpl().getTopicSubscribeInfo(topic);
+        MessageQueue messageQueue = null;
+        if (CollectionUtils.isNotEmpty(topicSubscribeInfo)) {
+            for (MessageQueue queue : topicSubscribeInfo) {
+                if (queue.getQueueId() == msg.getQueueId()) {
+                    messageQueue = queue;
+                    break;
+                }
+            }
+        }
+
+        try {
+            String strategyId = NULL;
+            try {
+                strategyId = String.valueOf(this.messageListener.computeStrategy(msg));
+            } catch (Exception e) {
+                log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+            }
+            String groupId = NULL;
+            try {
+                groupId = String.valueOf(this.messageListener.computeGroup(msg));
+            } catch (Exception e) {
+                log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+            }
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            //the test message should not update the stage offset
+            context.setStageIndex(getCurrentLeftoverStageIndex(messageQueue, topic, strategyId, groupId));
+            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
+            if (status != null) {
+                switch (status) {
+                    case COMMIT:
+                        result.setConsumeResult(CMResult.CR_COMMIT);
+                        break;
+                    case ROLLBACK:
+                        result.setConsumeResult(CMResult.CR_ROLLBACK);
+                        break;
+                    case SUCCESS:
+                        result.setConsumeResult(CMResult.CR_SUCCESS);
+                        break;
+                    case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                        result.setConsumeResult(CMResult.CR_LATER);
+                        break;
+                    default:
+                        break;
+                }
+            } else {
+                result.setConsumeResult(CMResult.CR_RETURN_NULL);
+            }
+            AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+            synchronized (currentStageOffset) {
+                int original = currentStageOffset.get();
+                this.messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                currentStageOffset.set(original);
+            }
+        } catch (Throwable e) {
+            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
+            result.setRemark(RemotingHelper.exceptionSimpleDesc(e));
+
+            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
+                RemotingHelper.exceptionSimpleDesc(e),
+                ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                msgs,
+                mq), e);
+        }
+        result.setAutoCommit(context.isAutoCommit());
+        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
+
+        log.info("consumeMessageDirectly Result: {}", result);
+
+        return result;
+    }
+
+    @Override
+    public void submitConsumeRequest(
+        final List<MessageExt> msgs,
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final boolean dispatchToConsume) {
+        if (dispatchToConsume) {
+            DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
+            this.dispatchExecutor.submit(dispatchRequest);
+        }
+    }
+
+    public synchronized void lockMQPeriodically() {
+        if (!this.stopped) {
+            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
+        }
+    }
+
+    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
+        final long delayMills) {
+        this.scheduledExecutorService.schedule(new Runnable() {
+            @Override
+            public void run() {
+                boolean lockOK = ConsumeMessageStagedConcurrentlyService.this.lockOneMQ(mq);
+                if (lockOK) {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
+                } else {
+                    ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
+                }
+            }
+        }, delayMills, TimeUnit.MILLISECONDS);
+    }
+
+    public synchronized boolean lockOneMQ(final MessageQueue mq) {
+        if (!this.stopped) {
+            return this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq);
+        }
+
+        return false;
+    }
+
+    private void submitConsumeRequestLater(
+        final ProcessQueue processQueue,
+        final MessageQueue messageQueue,
+        final long suspendTimeMillis
+    ) {
+        long timeMillis = suspendTimeMillis;
+        if (timeMillis == -1) {
+            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
+        }
+
+        if (timeMillis < 10) {
+            timeMillis = 10;
+        } else if (timeMillis > 30000) {
+            timeMillis = 30000;
+        }
+
+        this.scheduledExecutorService.schedule(new Runnable() {
+
+            @Override
+            public void run() {
+                ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequest(null, processQueue, messageQueue, true);
+            }
+        }, timeMillis, TimeUnit.MILLISECONDS);
+    }
+
+    public boolean processConsumeResult(
+        final String strategyId,
+        final String groupId,
+        final List<MessageExt> msgs,
+        final ConsumeOrderlyStatus status,
+        final ConsumeStagedConcurrentlyContext context,
+        final ConsumeRequest consumeRequest
+    ) {
+        MessageQueue messageQueue = consumeRequest.getMessageQueue();
+        String topic = messageQueue.getTopic();
+        AtomicInteger currentStageOffset = getCurrentStageOffset(messageQueue, topic, strategyId, groupId);
+        boolean continueConsume = true;
+        long commitOffset = -1L;
+        int commitStageOffset = -1;
+        if (context.isAutoCommit()) {
+            switch (status) {
+                case COMMIT:
+                case ROLLBACK:
+                    log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
+                        messageQueue);
+                case SUCCESS:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    } else {
+                        commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                        commitStageOffset = currentStageOffset.get();
+                    }
+                    break;
+                default:
+                    break;
+            }
+        } else {
+            switch (status) {
+                case SUCCESS:
+                    this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, topic, msgs.size());
+                    break;
+                case COMMIT:
+                    commitOffset = consumeRequest.getProcessQueue().commitMessages(msgs);
+                    commitStageOffset = currentStageOffset.get();
+                    break;
+                case ROLLBACK:
+                    consumeRequest.getProcessQueue().rollback();
+                    this.submitConsumeRequestLater(
+                        consumeRequest.getProcessQueue(),
+                        messageQueue,
+                        context.getSuspendCurrentQueueTimeMillis());
+                    continueConsume = false;
+                    break;
+                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
+                    synchronized (currentStageOffset) {
+                        currentStageOffset.set(currentStageOffset.get() - msgs.size());
+                    }
+                    this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, topic, msgs.size());
+                    if (checkReconsumeTimes(msgs)) {
+                        consumeRequest.getProcessQueue().makeMessageToConsumeAgain(msgs);
+                        this.submitConsumeRequestLater(
+                            consumeRequest.getProcessQueue(),
+                            messageQueue,
+                            context.getSuspendCurrentQueueTimeMillis());
+                        continueConsume = false;
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+
+        if (commitOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(messageQueue, commitOffset, false);
+        }
+
+        if (stageOffsetStore != null && commitStageOffset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
+            synchronized (currentStageOffset) {
+                messageListener.rollbackCurrentStageOffsetIfNeed(topic, strategyId, groupId, currentStageOffset, msgs);
+                //prevent users from resetting the value of currentStageOffset to a value less than 0
+                currentStageOffset.set(Math.max(0, currentStageOffset.get()));
+            }
+            commitStageOffset = currentStageOffset.get();
+            if (!consumeRequest.getProcessQueue().isDropped()) {
+                stageOffsetStore.updateStageOffset(messageQueue, strategyId, groupId, commitStageOffset, false);
+            }
+        }
+
+        return continueConsume;
+    }
+
+    public ConsumerStatsManager getConsumerStatsManager() {
+        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
+    }
+
+    private int getMaxReconsumeTimes() {
+        // default reconsume times: Integer.MAX_VALUE
+        if (this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1) {
+            return Integer.MAX_VALUE;
+        } else {
+            return this.defaultMQPushConsumer.getMaxReconsumeTimes();
+        }
+    }
+
+    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
+        boolean suspend = false;
+        if (msgs != null && !msgs.isEmpty()) {
+            for (MessageExt msg : msgs) {
+                if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
+                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
+                    if (!sendMessageBack(msg)) {
+                        suspend = true;
+                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                    }
+                } else {
+                    suspend = true;
+                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
+                }
+            }
+        }
+        return suspend;
+    }
+
+    public boolean sendMessageBack(final MessageExt msg) {
+        try {
+            // max reconsume times exceeded then send to dead letter queue.
+            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()), msg.getBody());
+            String originMsgId = MessageAccessor.getOriginMessageId(msg);
+            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
+            newMsg.setFlag(msg.getFlag());
+            MessageAccessor.setProperties(newMsg, msg.getProperties());
+            MessageAccessor.putProperty(newMsg, MessageConst.PROPERTY_RETRY_TOPIC, msg.getTopic());
+            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
+            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(getMaxReconsumeTimes()));
+            MessageAccessor.clearProperty(newMsg, MessageConst.PROPERTY_TRANSACTION_PREPARED);
+            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
+
+            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory().getDefaultMQProducer().send(newMsg);
+            return true;
+        } catch (Exception e) {
+            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), e);
+        }
+
+        return false;
+    }
+
+    public void resetNamespace(final List<MessageExt> msgs) {
+        for (MessageExt msg : msgs) {
+            if (StringUtils.isNotEmpty(this.defaultMQPushConsumer.getNamespace())) {
+                msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(), this.defaultMQPushConsumer.getNamespace()));
+            }
+        }
+    }
+
+    class DispatchRequest implements Runnable {
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+
+        public DispatchRequest(ProcessQueue processQueue,
+            MessageQueue messageQueue) {
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+        }
+
+        @Override
+        public void run() {
+            if (this.processQueue.isDropped()) {
+                log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                return;
+            }
+
+            String topic = this.messageQueue.getTopic();
+            final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
+            synchronized (objLock) {
+                if (MessageModel.BROADCASTING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                    || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
+                    final long beginTime = System.currentTimeMillis();
+                    for (final AtomicBoolean continueConsume = new AtomicBoolean(true); continueConsume.get(); ) {
+                        if (this.processQueue.isDropped()) {
+                            log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && !this.processQueue.isLocked()) {
+                            log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())
+                            && this.processQueue.isLockExpired()) {
+                            log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
+                            ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10);
+                            break;
+                        }
+
+                        long interval = System.currentTimeMillis() - beginTime;
+                        if (interval > MAX_TIME_CONSUME_CONTINUOUSLY) {
+                            ConsumeMessageStagedConcurrentlyService.this.submitConsumeRequestLater(processQueue, messageQueue, 10);
+                            break;
+                        }
+
+                        final int consumeBatchSize =
+                            ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
+                        int takeSize = ConsumeMessageStagedConcurrentlyService.this.pullBatchSize * consumeBatchSize;
+                        List<MessageExt> msgs = this.processQueue.takeMessages(takeSize);
+                        if (!msgs.isEmpty()) {
+                            //ensure that the stage definitions are up to date
+                            ConsumeMessageStagedConcurrentlyService.this.refreshStageDefinition();
+                            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = removeAndRePutAllMessagesInTheNextStage(topic, msgs);
+                            for (Map.Entry<String, Map<String, List<MessageExt>>> entry : messageGroupByStrategyThenGroup.entrySet()) {
+                                String strategyId = entry.getKey();
+                                Map<String, List<MessageExt>> messageGroups = entry.getValue();
+                                for (Map.Entry<String, List<MessageExt>> innerEntry : messageGroups.entrySet()) {
+                                    String groupId = innerEntry.getKey();
+                                    List<MessageExt> messagesCanConsume = innerEntry.getValue();
+                                    List<List<MessageExt>> lists = UtilAll.partition(messagesCanConsume, consumeBatchSize);
+                                    for (final List<MessageExt> list : lists) {
+                                        defaultMQPushConsumerImpl.resetRetryAndNamespace(list, defaultMQPushConsumer.getConsumerGroup());
+                                        int currentLeftoverStageIndex =
+                                            ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStageIndexAndUpdate(this.messageQueue, topic, strategyId, groupId, list.size());
+                                        ConsumeRequest consumeRequest = new ConsumeRequest(list, this.processQueue, this.messageQueue, continueConsume, currentLeftoverStageIndex, strategyId, groupId);
+                                        if (currentLeftoverStageIndex >= 0) {
+                                            engine.runPriorityAsync(currentLeftoverStageIndex, consumeRequest);
+                                        } else {
+                                            //If the strategyId is null, this branch is taken
+                                            engine.runPriorityAsync(consumeRequest);
+                                        }
+                                    }
+                                }
+                            }
+                        } else {
+                            continueConsume.set(false);
+                        }
+                    }
+                } else {
+                    if (this.processQueue.isDropped()) {
+                        log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
+                        return;
+                    }
+
+                    ConsumeMessageStagedConcurrentlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 100);
+                }
+            }
+        }
+
+        private Map<String, Map<String, List<MessageExt>>> removeAndRePutAllMessagesInTheNextStage(String topic,
+            List<MessageExt> msgs) {
+            Map<String, Map<String, List<MessageExt>>> messageGroupByStrategyThenGroup = new LinkedHashMap<>();
+            for (MessageExt message : msgs) {
+                String strategyId = NULL;
+                try {
+                    strategyId = String.valueOf(messageListener.computeStrategy(message));
+                } catch (Exception e) {
+                    log.error("computeStrategy failed with exception:" + e.getMessage() + " !");
+                }
+                String groupId = NULL;
+                try {
+                    groupId = String.valueOf(messageListener.computeGroup(message));
+                } catch (Exception e) {
+                    log.error("computeGroup failed with exception:" + e.getMessage() + " !");
+                }
+                //null strategy means direct concurrency
+                Map<String, List<MessageExt>> messageGroupByStrategy = messageGroupByStrategyThenGroup.putIfAbsent(strategyId, new LinkedHashMap<>());
+                if (null == messageGroupByStrategy) {
+                    messageGroupByStrategy = messageGroupByStrategyThenGroup.get(strategyId);
+                }
+                List<MessageExt> messages = messageGroupByStrategy.putIfAbsent(groupId, new CopyOnWriteArrayList<>());
+                if (null == messages) {
+                    messages = messageGroupByStrategy.get(groupId);
+                }
+                messages.add(message);
+            }
+            for (Map.Entry<String, Map<String, List<MessageExt>>> entry : messageGroupByStrategyThenGroup.entrySet()) {
+                String strategyId = entry.getKey();
+                Map<String, List<MessageExt>> messageGroupByStrategy = entry.getValue();
+                for (Map.Entry<String, List<MessageExt>> innerEntry : messageGroupByStrategy.entrySet()) {
+                    String groupId = innerEntry.getKey();
+                    List<MessageExt> messages = innerEntry.getValue();
+                    int leftoverStage = ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStage(this.messageQueue, topic, strategyId, groupId);
+                    int size = messages.size();
+                    if (leftoverStage < 0 || size <= leftoverStage) {
+                        continue;
+                    }
+                    List<MessageExt> list = messages.subList(leftoverStage, size);
+                    //the messages must be put back here
+                    this.processQueue.putMessage(list);
+                    messages.removeAll(list);
+                }
+            }
+            return messageGroupByStrategyThenGroup;
+        }
+    }
+
+    class ConsumeRequest implements Runnable {
+        private final List<MessageExt> msgs;
+        private final ProcessQueue processQueue;
+        private final MessageQueue messageQueue;
+        private final AtomicBoolean continueConsume;
+        private final int currentLeftoverStageIndex;
+        private final String strategyId;
+        private final String groupId;
+
+        public ConsumeRequest(List<MessageExt> msgs,
+            ProcessQueue processQueue,
+            MessageQueue messageQueue,
+            AtomicBoolean continueConsume,
+            int currentLeftoverStage,
+            String strategyId,
+            String groupId) {
+            this.msgs = msgs;
+            this.processQueue = processQueue;
+            this.messageQueue = messageQueue;
+            this.continueConsume = continueConsume;
+            this.currentLeftoverStageIndex = currentLeftoverStage;
+            this.strategyId = strategyId;
+            this.groupId = groupId;
+        }
+
+        public ProcessQueue getProcessQueue() {
+            return processQueue;
+        }
+
+        public MessageQueue getMessageQueue() {
+            return messageQueue;
+        }
+
+        @Override
+        public void run() {
+            ConsumeStagedConcurrentlyContext context = new ConsumeStagedConcurrentlyContext(this.messageQueue);
+            context.setStrategyId(strategyId);
+            context.setGroupId(groupId);
+            context.setStageIndex(currentLeftoverStageIndex);
+            ConsumeOrderlyStatus status = null;
+
+            ConsumeMessageContext consumeMessageContext = null;
+            if (ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
+                consumeMessageContext = new ConsumeMessageContext();
+                consumeMessageContext
+                    .setConsumerGroup(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumer.getConsumerGroup());
+                consumeMessageContext.setNamespace(defaultMQPushConsumer.getNamespace());
+                consumeMessageContext.setMq(messageQueue);
+                consumeMessageContext.setMsgList(msgs);
+                consumeMessageContext.setSuccess(false);
+                // init the consume context type
+                consumeMessageContext.setProps(new HashMap<String, String>());
+                ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookBefore(consumeMessageContext);
+            }
+
+            long beginTimestamp = System.currentTimeMillis();
+            ConsumeReturnType returnType = ConsumeReturnType.SUCCESS;
+            boolean hasException = false;
+            try {
+                this.processQueue.getConsumeLock().lock();
+                if (this.processQueue.isDropped()) {
+                    log.warn("consumeMessage, the message queue not be able to consume, because it's dropped. {}",
+                        this.messageQueue);
+                    continueConsume.set(false);
+                    return;
+                }
+                for (MessageExt msg : msgs) {
+                    MessageAccessor.setConsumeStartTimeStamp(msg, String.valueOf(System.currentTimeMillis()));
+                }
+                status = messageListener.consumeMessage(Collections.unmodifiableList(msgs), context);
+            } catch (Throwable e) {
+                log.warn("consumeMessage exception: {} Group: {} Msgs: {} MQ: {}",
+                    RemotingHelper.exceptionSimpleDesc(e),
+                    ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                    msgs,
+                    messageQueue);
+                hasException = true;
+            } finally {
+                this.processQueue.getConsumeLock().unlock();
+            }
+
+            if (null == status
+                || ConsumeOrderlyStatus.ROLLBACK == status
+                || ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
+                log.warn("consumeMessage Orderly return not OK, Group: {} Msgs: {} MQ: {}",
+                    ConsumeMessageStagedConcurrentlyService.this.consumerGroup,
+                    msgs,
+                    messageQueue);
+            }
+
+            long consumeRT = System.currentTimeMillis() - beginTimestamp;
+            if (null == status) {
+                if (hasException) {
+                    returnType = ConsumeReturnType.EXCEPTION;
+                } else {
+                    returnType = ConsumeReturnType.RETURNNULL;
+                }
+            } else if (consumeRT >= defaultMQPushConsumer.getConsumeTimeout() * 60 * 1000) {
+                returnType = ConsumeReturnType.TIME_OUT;
+            } else if (ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
+                returnType = ConsumeReturnType.FAILED;
+            } else if (ConsumeOrderlyStatus.SUCCESS == status) {
+                returnType = ConsumeReturnType.SUCCESS;
+            }
+
+            if (ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
+                consumeMessageContext.getProps().put(MixAll.CONSUME_CONTEXT_TYPE, returnType.name());
+            }
+
+            if (null == status) {
+                status = ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
+            }
+
+            if (ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
+                consumeMessageContext.setStatus(status.toString());
+                consumeMessageContext
+                    .setSuccess(ConsumeOrderlyStatus.SUCCESS == status || ConsumeOrderlyStatus.COMMIT == status);
+                ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookAfter(consumeMessageContext);
+            }
+
+            ConsumeMessageStagedConcurrentlyService.this.getConsumerStatsManager()
+                .incConsumeRT(ConsumeMessageStagedConcurrentlyService.this.consumerGroup, messageQueue.getTopic(), consumeRT);
+            continueConsume.set(ConsumeMessageStagedConcurrentlyService.this.processConsumeResult(strategyId, groupId, msgs, status, context, this)

Review comment:
       Good idea; this makes the code easier to understand.

##########
File path: common/src/main/java/org/apache/rocketmq/common/message/MessageClientExt.java
##########
@@ -36,7 +36,7 @@ public String getMsgId() {
         }
     }
 
-    public void setMsgId(String msgId) {
+    @Override public void setMsgId(String msgId) {

Review comment:
       ok
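
As background on the `@Override` change shown in this hunk: the annotated setter overrides one inherited from the superclass (`MessageExt`), and marking the intended override lets the compiler catch accidental signature drift. A minimal, generic sketch of that effect (the class names below are illustrative, not RocketMQ's):

```java
class Base {
    public void setMsgId(String msgId) { /* base behaviour */ }
}

class Derived extends Base {
    @Override
    public void setMsgId(String msgId) { /* compiles only because this really overrides Base.setMsgId */ }

    // If the name or signature drifted (e.g. setMsgid), @Override would turn the
    // silently-added "new method" into a compile-time error.
}
```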




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] ifplusor commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
ifplusor commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730326368



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {

Review comment:
       because `start` is a member of `ConsumeMessageStagedConcurrentlyService`
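
To illustrate the qualified reference being discussed (`ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically()`): inside the anonymous `Runnable`, a plain `this` denotes the Runnable instance, so the qualified form names the enclosing service instance explicitly. A minimal, self-contained sketch of the idiom, with illustrative class and method names rather than the PR's:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OuterService {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    public void start() {
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                // "this" here is the anonymous Runnable; the enclosing instance is
                // named explicitly with the qualified form, mirroring
                // ConsumeMessageStagedConcurrentlyService.this in the diff above.
                OuterService.this.lockPeriodically();
            }
        }, 1000, 20000, TimeUnit.MILLISECONDS);
    }

    private void lockPeriodically() {
        System.out.println("periodic work");
    }
}
```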




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang commented on a change in pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang commented on a change in pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#discussion_r730393996



##########
File path: client/src/main/java/org/apache/rocketmq/client/impl/consumer/ConsumeMessageStagedConcurrentlyService.java
##########
@@ -0,0 +1,872 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.rocketmq.client.impl.consumer;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
+import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
+import org.apache.rocketmq.client.consumer.listener.ConsumeReturnType;
+import org.apache.rocketmq.client.consumer.listener.ConsumeStagedConcurrentlyContext;
+import org.apache.rocketmq.client.consumer.listener.MessageListenerStagedConcurrently;
+import org.apache.rocketmq.client.consumer.store.ReadOffsetType;
+import org.apache.rocketmq.client.consumer.store.StageOffsetStore;
+import org.apache.rocketmq.client.hook.ConsumeMessageContext;
+import org.apache.rocketmq.client.log.ClientLogger;
+import org.apache.rocketmq.client.stat.ConsumerStatsManager;
+import org.apache.rocketmq.common.MixAll;
+import org.apache.rocketmq.common.ThreadFactoryImpl;
+import org.apache.rocketmq.common.UtilAll;
+import org.apache.rocketmq.common.concurrent.PriorityConcurrentEngine;
+import org.apache.rocketmq.common.message.Message;
+import org.apache.rocketmq.common.message.MessageAccessor;
+import org.apache.rocketmq.common.message.MessageConst;
+import org.apache.rocketmq.common.message.MessageExt;
+import org.apache.rocketmq.common.message.MessageQueue;
+import org.apache.rocketmq.common.protocol.NamespaceUtil;
+import org.apache.rocketmq.common.protocol.body.CMResult;
+import org.apache.rocketmq.common.protocol.body.ConsumeMessageDirectlyResult;
+import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
+import org.apache.rocketmq.common.utils.ThreadUtils;
+import org.apache.rocketmq.logging.InternalLogger;
+import org.apache.rocketmq.remoting.common.RemotingHelper;
+
+public class ConsumeMessageStagedConcurrentlyService implements ConsumeMessageService {
+    private static final String NULL = "null";
+    private static final InternalLogger log = ClientLogger.getLog();
+    private final static long MAX_TIME_CONSUME_CONTINUOUSLY =
+        Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
+    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
+    private final DefaultMQPushConsumer defaultMQPushConsumer;
+    private final MessageListenerStagedConcurrently messageListener;
+    private final BlockingQueue<Runnable> consumeRequestQueue;
+    private final ThreadPoolExecutor dispatchExecutor;
+    private final ThreadPoolExecutor consumeExecutor;
+    private final PriorityConcurrentEngine engine;
+    private final String consumerGroup;
+    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
+    private final ScheduledExecutorService scheduledExecutorService;
+    private volatile boolean stopped = false;
+    private final Map<String/*strategyId*/, List<Integer>/*StageDefinition*/> summedStageDefinitionMap;
+    private final ConcurrentMap<String/*topic*/, ConcurrentMap<String/*strategyId*/, ConcurrentMap<String/*groupId*/, AtomicInteger/*currentStageOffset*/>>> currentStageOffsetMap = new ConcurrentHashMap<>();
+    private final int pullBatchSize;
+    private final StageOffsetStore stageOffsetStore;
+
+    public ConsumeMessageStagedConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
+        MessageListenerStagedConcurrently messageListener) {
+        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
+        this.messageListener = messageListener;
+        this.summedStageDefinitionMap = new ConcurrentHashMap<>();
+        this.refreshStageDefinition();
+
+        this.stageOffsetStore = this.defaultMQPushConsumerImpl.getStageOffsetStore();
+
+        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
+        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
+        this.pullBatchSize = this.defaultMQPushConsumer.getPullBatchSize();
+        this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
+
+        int consumeThreadMin = this.defaultMQPushConsumer.getConsumeThreadMin();
+        int consumeThreadMax = this.defaultMQPushConsumer.getConsumeThreadMax();
+        this.dispatchExecutor = new ThreadPoolExecutor(
+            (int) Math.ceil(consumeThreadMin * 1.0 / this.pullBatchSize),
+            (int) Math.ceil(consumeThreadMax * 1.0 / this.pullBatchSize),
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingQueue<Runnable>(),
+            new ThreadFactoryImpl("DispatchMessageThread_"));
+        // when the number of threads is equal to
+        // the topic consumeQueue size multiplied by this.pullBatchSize,
+        // good performance can be obtained
+        this.consumeExecutor = new ThreadPoolExecutor(
+            consumeThreadMin,
+            consumeThreadMax,
+            1000 * 60,
+            TimeUnit.MILLISECONDS,
+            this.consumeRequestQueue,
+            new ThreadFactoryImpl("ConsumeMessageThread_"));
+        engine = new PriorityConcurrentEngine(this.consumeExecutor);
+
+        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
+    }
+
+    private void refreshStageDefinition() {
+        Map<String, List<Integer>> strategies = messageListener.getStageDefinitionStrategies();
+        if (MapUtils.isNotEmpty(strategies)) {
+            for (Map.Entry<String, List<Integer>> entry : strategies.entrySet()) {
+                String strategyId = entry.getKey();
+                List<Integer> definitions = entry.getValue();
+                List<Integer> summedStageDefinitions = new ArrayList<>();
+                if (definitions != null) {
+                    int sum = 0;
+                    for (Integer stageDefinition : definitions) {
+                        summedStageDefinitions.add(sum = sum + stageDefinition);
+                    }
+                }
+                summedStageDefinitionMap.put(strategyId, summedStageDefinitions);
+            }
+        }
+    }
+
+    @Override
+    public void start() {
+        engine.start();
+        if (MessageModel.CLUSTERING.equals(ConsumeMessageStagedConcurrentlyService.this.defaultMQPushConsumerImpl.messageModel())) {
+            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
+                @Override
+                public void run() {
+                    ConsumeMessageStagedConcurrentlyService.this.lockMQPeriodically();
+                }
+            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
+        }
+    }
+
+    @Override
+    public void shutdown(long awaitTerminateMillis) {
+        this.stopped = true;
+        this.scheduledExecutorService.shutdown();
+        ThreadUtils.shutdownGracefully(this.dispatchExecutor, awaitTerminateMillis, TimeUnit.MILLISECONDS);
+        engine.shutdown(awaitTerminateMillis);
+        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
+            this.unlockAllMQ();
+        }
+    }
+
+    public synchronized void unlockAllMQ() {
+        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
+    }
+
+    public AtomicInteger getCurrentStageOffset(MessageQueue messageQueue, String topic, String strategyId,
+        String groupId) {
+        if (null == strategyId || NULL.equals(strategyId)) {
+            return new AtomicInteger(-1);
+        }
+        groupId = String.valueOf(groupId);
+        ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> groupByStrategy = currentStageOffsetMap.get(topic);
+        if (null == groupByStrategy) {
+            ConcurrentMap<String, ConcurrentMap<String, AtomicInteger>> stageOffset = stageOffsetStore == null ?
+                new ConcurrentHashMap<>() : convert(stageOffsetStore.readStageOffset(messageQueue, ReadOffsetType.MEMORY_FIRST_THEN_STORE));
+            currentStageOffsetMap.putIfAbsent(topic, stageOffset);

Review comment:
       ```java
       public void submitConsumeRequest(
           final List<MessageExt> msgs,
           final ProcessQueue processQueue,
           final MessageQueue messageQueue,
           final boolean dispatchToConsume) {
           if (dispatchToConsume) {
               //"processQueue" can be different in one topic
               DispatchRequest dispatchRequest = new DispatchRequest(processQueue, messageQueue);
               this.dispatchExecutor.submit(dispatchRequest);
           }
       }

       //line 675
       int currentLeftoverStageIndex =
           //"this.processQueue" can be different in one topic
           ConsumeMessageStagedConcurrentlyService.this.getCurrentLeftoverStageIndexAndUpdate(this.messageQueue, topic, strategyId, groupId, list.size());
       ```
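
Since the point of the reply above is that one topic fans out to several `MessageQueue`s, each with its own `ProcessQueue` (and therefore its own per-queue stage state), a small stand-alone sketch of that relationship may help; the demo class name is hypothetical, while `MessageQueue` and `ProcessQueue` are the existing client classes:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.rocketmq.client.impl.consumer.ProcessQueue;
import org.apache.rocketmq.common.message.MessageQueue;

public class PerQueueStateDemo {
    public static void main(String[] args) {
        // One topic is split across several MessageQueue instances; the consumer keeps
        // a separate ProcessQueue per MessageQueue, so per-queue state such as a stage
        // offset cannot be keyed by topic alone.
        ConcurrentMap<MessageQueue, ProcessQueue> processQueueTable = new ConcurrentHashMap<>();
        for (int queueId = 0; queueId < 4; queueId++) {
            processQueueTable.put(new MessageQueue("TopicTest", "broker-a", queueId), new ProcessQueue());
        }
        // Two queues of the same topic resolve to different ProcessQueue objects.
        System.out.println(processQueueTable.get(new MessageQueue("TopicTest", "broker-a", 0))
            != processQueueTable.get(new MessageQueue("TopicTest", "broker-a", 1))); // prints true
    }
}
```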




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40382364/badge)](https://coveralls.io/builds/40382364)
   
   Coverage decreased (-0.5%) to 53.458% when pulling **d3f7136b27635a74e786ac440f7eb94c77142616 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **a1babab507934e81f0e05b2867566c8b459be341 on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (760962b) into [develop](https://codecov.io/gh/apache/rocketmq/commit/a2f8810c9adedcd82fd4cb9a69b17128a1a96b5e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a2f8810) will **increase** coverage by `0.32%`.
   > The diff coverage is `28.85%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.92%   48.25%   +0.32%     
   + Complexity      4559     3716     -843     
   =============================================
     Files            552      320     -232     
     Lines          36633    30326    -6307     
     Branches        4845     4335     -510     
   =============================================
   - Hits           17558    14635    -2923     
   + Misses         16854    13683    -3171     
   + Partials        2221     2008     -213     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.84% <0.00%> (-3.70%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `11.59% <11.59%> (ø)` | |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `17.05% <17.05%> (ø)` | |
   | [...a/org/apache/rocketmq/broker/BrokerController.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvQnJva2VyQ29udHJvbGxlci5qYXZh) | `44.83% <41.66%> (-0.07%)` | :arrow_down: |
   | ... and [265 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [a2f8810...760962b](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40338818/badge)](https://coveralls.io/builds/40338818)
   
   Coverage decreased (-0.6%) to 53.362% when pulling **c01637d2e2987bd0c0d24b2df9879bb0022e4b85 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **a1babab507934e81f0e05b2867566c8b459be341 on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40989758/badge)](https://coveralls.io/builds/40989758)
   
   Coverage decreased (-0.8%) to 53.291% when pulling **9845110d5ac8a7b5b8aeb6055ee1bac45c2bc188 on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **df35edf3d5cc9b5b497c5158912dd81f3e6e2104 on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] coveralls edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
coveralls edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214461


   
   [![Coverage Status](https://coveralls.io/builds/40482206/badge)](https://coveralls.io/builds/40482206)
   
   Coverage decreased (-0.8%) to 53.35% when pulling **817addd10c85615b2a97220a570c3aed96642cbb on dragon-zhang:dev_periodic_concurrent_consumer_support2** into **df1d93fc8859377b92ba87c6947911281656f355 on apache:develop**.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] codecov-commenter edited a comment on pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983#issuecomment-855214479


   # [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#2983](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (9845110) into [develop](https://codecov.io/gh/apache/rocketmq/commit/df35edf3d5cc9b5b497c5158912dd81f3e6e2104?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (df35edf) will **increase** coverage by `0.29%`.
   > The diff coverage is `28.85%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/rocketmq/pull/2983/graphs/tree.svg?width=650&height=150&src=pr&token=4w0sxP1wZv&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   
   ```diff
   @@              Coverage Diff              @@
   ##             develop    #2983      +/-   ##
   =============================================
   + Coverage      47.98%   48.28%   +0.29%     
   + Complexity      4561     3716     -845     
   =============================================
     Files            552      320     -232     
     Lines          36633    30326    -6307     
     Branches        4845     4335     -510     
   =============================================
   - Hits           17578    14642    -2936     
   + Misses         16827    13674    -3153     
   + Partials        2228     2010     -218     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...ocketmq/broker/processor/AdminBrokerProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0FkbWluQnJva2VyUHJvY2Vzc29yLmphdmE=) | `7.93% <0.00%> (-0.03%)` | :arrow_down: |
   | [...etmq/broker/processor/ConsumerManageProcessor.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvcHJvY2Vzc29yL0NvbnN1bWVyTWFuYWdlUHJvY2Vzc29yLmphdmE=) | `4.25% <0.00%> (-1.63%)` | :arrow_down: |
   | [...ocketmq/client/consumer/DefaultMQPushConsumer.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvY29uc3VtZXIvRGVmYXVsdE1RUHVzaENvbnN1bWVyLmphdmE=) | `53.73% <0.00%> (-0.82%)` | :arrow_down: |
   | [...g/apache/rocketmq/client/impl/MQClientAPIImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9NUUNsaWVudEFQSUltcGwuamF2YQ==) | `11.97% <0.00%> (-0.22%)` | :arrow_down: |
   | [...he/rocketmq/client/impl/consumer/ProcessQueue.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Qcm9jZXNzUXVldWUuamF2YQ==) | `55.41% <0.00%> (-4.59%)` | :arrow_down: |
   | [...cketmq/client/impl/consumer/RebalancePushImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9SZWJhbGFuY2VQdXNoSW1wbC5qYXZh) | `34.23% <0.00%> (-1.28%)` | :arrow_down: |
   | [...etmq/broker/offset/ConsumerStageOffsetManager.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvb2Zmc2V0L0NvbnN1bWVyU3RhZ2VPZmZzZXRNYW5hZ2VyLmphdmE=) | `11.59% <11.59%> (ø)` | |
   | [...lient/impl/consumer/DefaultMQPushConsumerImpl.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9EZWZhdWx0TVFQdXNoQ29uc3VtZXJJbXBsLmphdmE=) | `39.41% <11.76%> (-0.76%)` | :arrow_down: |
   | [...sumer/ConsumeMessageStagedConcurrentlyService.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-Y2xpZW50L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9jbGllbnQvaW1wbC9jb25zdW1lci9Db25zdW1lTWVzc2FnZVN0YWdlZENvbmN1cnJlbnRseVNlcnZpY2UuamF2YQ==) | `17.05% <17.05%> (ø)` | |
   | [...a/org/apache/rocketmq/broker/BrokerController.java](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-YnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9yb2NrZXRtcS9icm9rZXIvQnJva2VyQ29udHJvbGxlci5qYXZh) | `44.83% <41.66%> (-0.07%)` | :arrow_down: |
   | ... and [263 more](https://codecov.io/gh/apache/rocketmq/pull/2983/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [df35edf...9845110](https://codecov.io/gh/apache/rocketmq/pull/2983?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [rocketmq] dragon-zhang closed pull request #2983: [RIP-22] RocketMQ Stage Message Consumer Part

Posted by GitBox <gi...@apache.org>.
dragon-zhang closed pull request #2983:
URL: https://github.com/apache/rocketmq/pull/2983


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@rocketmq.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org