Posted to jira@kafka.apache.org by "vamossagar12 (via GitHub)" <gi...@apache.org> on 2023/04/28 15:03:02 UTC

[GitHub] [kafka] vamossagar12 commented on pull request #13646: KAFKA-14938: Fixing flaky test testConnectorBoundary

vamossagar12 commented on PR #13646:
URL: https://github.com/apache/kafka/pull/13646#issuecomment-1527703722

   Yeah, I agree with @yashmayya. Moreover, this
   
   ```
   Yes it will fail, but consumeAll is not failing due to timeout here but rather due to its nature of storing the end offsets before consuming.
   ```
   
   is not entirely correct, I think. I agree that what gets thrown is an AssertionError, but that's because the number of source records returned by `consumeAll` didn't meet the desired count within 60s. For starters, can you try increasing `CONSUME_RECORDS_TIMEOUT_MS` to 100s or so and see if it even works? Basically, we need to check whether the consumer is lagging or whether enough records are being produced in the first place. I think it would mostly be the former because, as Yash said, we are anyway waiting for 100 records to be committed. It's not an ideal fix, but let's first see if it works, and if needed we can dig deeper.
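   To make the distinction concrete, here is a rough, self-contained sketch of a consume-all helper, not the actual helper from the Connect integration test framework: it snapshots end offsets up front and then polls until the consumer catches up to them or a timeout expires. Raising the timeout only helps if the consumer is lagging behind that snapshot; records produced after the snapshot are never waited for. The constant name mirrors `CONSUME_RECORDS_TIMEOUT_MS` mentioned above, but the class, topic handling, and deserializers here are made up for illustration.
   
   ```java
   import java.time.Duration;
   import java.util.ArrayList;
   import java.util.List;
   import java.util.Map;
   import java.util.Properties;
   
   import org.apache.kafka.clients.consumer.ConsumerConfig;
   import org.apache.kafka.clients.consumer.ConsumerRecord;
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   import org.apache.kafka.common.TopicPartition;
   import org.apache.kafka.common.serialization.ByteArrayDeserializer;
   
   public class ConsumeAllSketch {
   
       // Hypothetical stand-in for the test's consume timeout; the suggestion above is
       // to try a larger value (e.g. 100s instead of 60s) purely as a diagnostic.
       private static final long CONSUME_RECORDS_TIMEOUT_MS = 100_000L;
   
       /**
        * Consume every record that existed in the topic at the moment this method was
        * called: end offsets are snapshotted first, then we poll until the consumer's
        * position reaches them or the timeout expires.
        */
       public static List<ConsumerRecord<byte[], byte[]>> consumeAll(String bootstrapServers, String topic) {
           Properties props = new Properties();
           props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
           props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
           props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
           props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
   
           try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
               List<TopicPartition> partitions = new ArrayList<>();
               consumer.partitionsFor(topic)
                       .forEach(p -> partitions.add(new TopicPartition(topic, p.partition())));
               consumer.assign(partitions);
               consumer.seekToBeginning(partitions);
   
               // Snapshot of end offsets taken *before* consuming; records produced after
               // this point are never waited for, no matter how large the timeout is.
               Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
   
               List<ConsumerRecord<byte[], byte[]>> records = new ArrayList<>();
               long deadline = System.currentTimeMillis() + CONSUME_RECORDS_TIMEOUT_MS;
               while (System.currentTimeMillis() < deadline && !caughtUp(consumer, endOffsets)) {
                   consumer.poll(Duration.ofMillis(500)).forEach(records::add);
               }
               return records;
           }
       }
   
       // True once the consumer's position has reached the snapshotted end offset of
       // every partition, i.e. the consumer is no longer lagging.
       private static boolean caughtUp(KafkaConsumer<byte[], byte[]> consumer,
                                       Map<TopicPartition, Long> endOffsets) {
           return endOffsets.entrySet().stream()
                   .allMatch(e -> consumer.position(e.getKey()) >= e.getValue());
       }
   }
   ```
   
   So if a 100s timeout makes the assertion pass, the shortfall was consumer lag; if it still fails with the same count, too few records were in the topic when the end offsets were captured, and we'd need to dig into the producing side instead.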

