Posted to dev@storm.apache.org by nabhanelrahman <gi...@git.apache.org> on 2016/06/14 19:14:51 UTC

[GitHub] storm pull request #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMM...

GitHub user nabhanelrahman opened a pull request:

    https://github.com/apache/storm/pull/1487

    off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_EARLIEST

    
    The documentation in the source code says that for both UNCOMMITTED_LATEST
    and UNCOMMITTED_EARLIEST the spout will start fetching from the last
    committed offset (if it exists), but the code actually fetches from the
    last committed offset + 1.
    
    If a topology is killed or crashes and is then redeployed, this off-by-one
    error causes one message per partition to be dropped.
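    
    To make the difference concrete, here is a minimal sketch against the plain
    KafkaConsumer API (not the actual spout code; the broker address, group id
    and topic name below are placeholders):
    
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "kafka-spout-demo");          // placeholder group id
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    
        TopicPartition tp = new TopicPartition("my-topic", 0);  // placeholder topic
        consumer.assign(Collections.singletonList(tp));
        OffsetAndMetadata committed = consumer.committed(tp);
        if (committed != null) {
            consumer.seek(tp, committed.offset());         // what the docs describe
            // consumer.seek(tp, committed.offset() + 1);  // what the code does today,
            //                                             // skipping one message
        }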

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/publica-project/storm storm-kafka-client_fetch_offset

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/storm/pull/1487.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1487
    


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by hmcl <gi...@git.apache.org>.
Github user hmcl commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    @nabhanelrahman I am addressing this and other edge cases in a patch that I am about to submit. I am afraid that just removing the +1 won't work for all the cases.


---

[GitHub] storm pull request #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMM...

Posted by nabhanelrahman <gi...@git.apache.org>.
Github user nabhanelrahman closed the pull request at:

    https://github.com/apache/storm/pull/1487


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by srdo <gi...@git.apache.org>.
Github user srdo commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    You probably shouldn't enable auto commit if you can't accept some message loss. As I understand it, auto commit makes the Kafka consumer commit the offsets of the messages it has received from poll() at a regular interval. That means messages can be lost if their offsets are committed after they've been pulled out of the consumer but before the topology has acked them. If you disable auto commit, the offsets will instead be committed when the topology acks the messages.
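    
    For reference, here is a rough sketch of what manual committing looks like on
    a plain KafkaConsumer (consumer, running and process() are assumed to exist
    elsewhere; this is not the spout's actual code):
    
        // with enable.auto.commit=false the consumer commits nothing on its own;
        // offsets only advance when the application commits them explicitly
        while (running) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                process(record);    // e.g. emit the tuple and wait for its ack
            }
            consumer.commitSync();  // commit only after processing succeeded
        }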
    
    I can't reproduce this issue. Here are the steps I took:
    
    1) Do a fresh install of Kafka 0.9.0.1
    2) Do a fresh install of Storm 1.0.1, start Nimbus and 1 supervisor on localhost
    3) Create a 1 partition topic on a newly installed 0.9.0.1 Kafka broker
    4) Deploy a topology containing only the Kafka spout and a bolt that prints the messages it receives to System.out
    5) Send 5 messages to Kafka using the kafka-console-producer.sh script in kafka/bin
    6) Verify that the topology prints those messages
    7) Kill the topology and wait for 30 seconds for it to be completely shut down
    8) Send 5 more messages
    9) Redeploy
    10) Confirm that all 5 new messages are received
    
    Here's the code I used: https://github.com/srdo/TestKafkaSpout


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by srdo <gi...@git.apache.org>.
Github user srdo commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    Could you also check whether you see the issue when auto commit is set to false?
    
    Are you running your topology in local mode or deploying it with storm jar?


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by nabhanelrahman <gi...@git.apache.org>.
Github user nabhanelrahman commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    I am able to reproduce the issue with auto-commit=true (I lose messages). How do we reconcile auto-commit not working as expected? Do we hardcode auto-commit=false when creating the KafkaConsumer object for the KafkaSpout?


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by srdo <gi...@git.apache.org>.
Github user srdo commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    Are you sure this is the case? It looks to me like the spout is committing the offset of the latest acked tuple that doesn't have uncommitted predecessors. It seems to make sense that it seeks to the last committed offset +1, since that's the earliest unacked message.
    
    The API does suggest doing the +1 on commit instead, but I'd imagine that's just a matter of taste.
    
    "The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1." (see https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html under commitSync(Map))


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by nabhanelrahman <gi...@git.apache.org>.
Github user nabhanelrahman commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    @srdo, 
    
    I have this property set:
    
     props.put(KafkaSpoutConfig.Consumer.ENABLE_AUTO_COMMIT, "true");
    
    Here are my steps:
    
    1) create 10 partition topic
    2) deploy topology
    3) send 10 msgs
    4) confirmed 10 msgs were received
    5) kill topology
    6) send 100 msgs
    7) re-deploy topology
    8) confirmed that only 90 msgs were received (was expecting 100 msgs). That means the first message in each partition was skipped when the spout was initialized.


---

[GitHub] storm issue #1487: off-by-one error for UNCOMMITTED_LATEST and UNCOMMITTED_E...

Posted by nabhanelrahman <gi...@git.apache.org>.
Github user nabhanelrahman commented on the issue:

    https://github.com/apache/storm/pull/1487
  
    @hmcl, I am closing out this pull request because I assume you will be addressing this issue in your patch.
    
    In the meantime, do you think I can avoid data loss if auto-commit is set to false?


---