Posted to issues@rocketmq.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/02/07 06:21:41 UTC

[jira] [Commented] (ROCKETMQ-80) Add batch feature

    [ https://issues.apache.org/jira/browse/ROCKETMQ-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15855411#comment-15855411 ] 

ASF GitHub Bot commented on ROCKETMQ-80:
----------------------------------------

GitHub user dongeforever opened a pull request:

    https://github.com/apache/incubator-rocketmq/pull/53

    [ROCKETMQ-80] Add batch feature

    Tests show that Kafka's million-level TPS is mainly owed to batching: when the batch size is set to 1, TPS drops by an order of magnitude. So I am trying to add this feature to RocketMQ.
    
    For a minimal effort, it works as follows:
     
    * Only add synchronous send methods to the MQProducer interface, e.g. **send(final Collection<Message> msgs)**
    * Use **MessageBatch**, which extends **Message** and implements **Iterable\<Message\>**
    * Use a byte buffer instead of a list of objects to avoid too much GC in the Broker.
    * Split the decode and encode logic out of **lockForPutMessage** to avoid too many race conditions.
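    The MessageBatch shape described above can be sketched roughly as follows. This is a self-contained illustration, not the PR's actual code: the Message class here is a minimal stand-in for RocketMQ's real Message, and the 4-byte length-prefix encoding is an assumed layout to show the "one byte buffer instead of a list of objects" idea, not the broker's actual wire format:

    ```java
    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Iterator;
    import java.util.List;

    // Minimal stand-in for org.apache.rocketmq.common.message.Message,
    // included only so this sketch compiles on its own.
    class Message {
        private final String topic;
        private final byte[] body;
        Message(String topic, byte[] body) { this.topic = topic; this.body = body; }
        String getTopic() { return topic; }
        byte[] getBody() { return body; }
    }

    // Sketch of the proposed MessageBatch: extends Message, is iterable over
    // its children, and encodes them into one contiguous byte buffer so the
    // Broker handles a single object rather than a list (less GC pressure).
    class MessageBatch extends Message implements Iterable<Message> {
        private final List<Message> messages;

        private MessageBatch(String topic, List<Message> messages) {
            super(topic, new byte[0]);
            this.messages = messages;
        }

        // Build a batch from a collection; all messages must share one topic.
        static MessageBatch generateFromList(Collection<Message> msgs) {
            List<Message> list = new ArrayList<>(msgs);
            if (list.isEmpty()) throw new IllegalArgumentException("empty batch");
            String topic = list.get(0).getTopic();
            for (Message m : list) {
                if (!topic.equals(m.getTopic()))
                    throw new IllegalArgumentException("batch must share one topic");
            }
            return new MessageBatch(topic, list);
        }

        // Encode all bodies into one length-prefixed byte buffer
        // (illustrative layout, not the actual CommitLog format).
        byte[] encode() {
            int size = 0;
            for (Message m : messages) size += 4 + m.getBody().length;
            ByteBuffer buf = ByteBuffer.allocate(size);
            for (Message m : messages) {
                buf.putInt(m.getBody().length);
                buf.put(m.getBody());
            }
            return buf.array();
        }

        @Override
        public Iterator<Message> iterator() { return messages.iterator(); }
    }
    ```

    With this shape, the producer-side **send(Collection<Message>)** overload can wrap the collection into a single MessageBatch and reuse the existing single-message send path, which is what keeps the change minimal.
    
    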
    
    Tests:
     On Linux with 24 cores, 48 GB RAM, and an SSD, with a single broker, using 50 threads to send messages with 50-byte bodies in batches of 50, we get about 1.5 million (150w) TPS until the disk is full.
     
    
    Potential problems:
    Although messages can be accumulated in the Broker very quickly, it takes time to dispatch them to the consume queue, which is much slower than accepting messages. So messages may not be consumable immediately after they are accepted.
    
    We may need to refactor the **ReputMessageService** to solve this problem.
    
    If anyone has ideas, please let me know or share them in this issue.
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dongeforever/incubator-rocketmq ROCKETMQ-80

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-rocketmq/pull/53.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #53
    
----
commit e03b6e6a496526848df603fd406b77aa6afc87d2
Author: dongeforever <zh...@yeah.net>
Date:   2017-02-07T06:12:16Z

    [ROCKETMQ-80] Add batch feature

----


> Add batch feature
> -----------------
>
>                 Key: ROCKETMQ-80
>                 URL: https://issues.apache.org/jira/browse/ROCKETMQ-80
>             Project: Apache RocketMQ
>          Issue Type: New Feature
>    Affects Versions: 4.1.0-incubating
>            Reporter: zander
>            Assignee: zander
>             Fix For: 4.1.0-incubating
>
>
> Tests show that Kafka's million-level TPS is mainly owed to batching: when the batch size is set to 1, TPS drops by an order of magnitude. So I am trying to add this feature to RocketMQ.
> For a minimal effort, it works as follows:
> Only add synchronous send methods to the MQProducer interface, e.g. send(final Collection<Message> msgs).
> Use MessageBatch which extends Message and implements Iterable<Message>.
> Use byte buffer instead of list of objects to avoid too much GC in Broker.
> Split the decode and encode logic from lockForPutMessage to avoid too many race conditions.
> Tests:
> On Linux with 24 cores, 48 GB RAM, and an SSD, using 50 threads to send messages with 50-byte bodies in batches of 50, we get about 1.5 million (150w) TPS until the disk is full.
> Potential problems:
> Although messages can be accumulated in the Broker very quickly, it takes time to dispatch them to the consume queue, which is much slower than accepting messages. So messages may not be consumable immediately after they are accepted.
> We may need to refactor the ReputMessageService to solve this problem.
> If anyone has ideas, please let me know or share them in this issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)