Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2021/01/11 11:14:41 UTC

[GitHub] [flink-benchmarks] Thesharing edited a comment on pull request #7: [FLINK-20612][runtime] Add benchmarks for scheduler

Thesharing edited a comment on pull request #7:
URL: https://github.com/apache/flink-benchmarks/pull/7#issuecomment-757882489


   Thank you so much for your review, Piotr. 
   
   1. Totally agreed with point 1. It's important to maintain the benchmarks, and we think it's necessary to monitor whether the scheduler's performance regresses, especially when changes touch it.
   2. Yes, the names are long-winded and full of duplicated words. It seems that the batch and streaming modes in `buildTopology` and `scheduling` can be merged easily, for example via a JMH parameter (see the sketch after this list). But the batch and streaming modes in `deploying` and `failover` have different `Setup` methods, so merging them seems a bit tricky. I'm still working on this.
   3. We are trying to find a balance between accuracy and efficiency. If we decrease the number of iterations, the bias grows. Maybe we could set the iteration count to 6 or fewer (see the annotation sketch below).
   4. Agreed with this. The scheduler benchmarks should run separately and not affect the other benchmarks (a possible runner setup is sketched below).
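   
   To make point 2 concrete, here is a rough sketch of the kind of merge I have in mind, using a JMH `@Param` so one benchmark method covers both execution modes. The class, field, and method names are just placeholders, not the actual ones in this PR:
   
   ```java
   import org.openjdk.jmh.annotations.Benchmark;
   import org.openjdk.jmh.annotations.Param;
   import org.openjdk.jmh.annotations.Scope;
   import org.openjdk.jmh.annotations.Setup;
   import org.openjdk.jmh.annotations.State;
   
   @State(Scope.Thread)
   public class BuildTopologyBenchmark {
   
       // JMH runs the benchmark once per parameter value, so a single
       // method covers both execution modes without duplicated names.
       @Param({"BATCH", "STREAMING"})
       public String executionMode;
   
       @Setup
       public void setup() {
           // Build the job graph for the given execution mode here.
       }
   
       @Benchmark
       public void buildTopology() {
           // Topology-building logic under measurement goes here.
       }
   }
   ```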
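   
   For point 3, the iteration count could be pinned with JMH annotations on a shared base class. The numbers below only reflect the values discussed above, not a final recommendation:
   
   ```java
   import java.util.concurrent.TimeUnit;
   import org.openjdk.jmh.annotations.BenchmarkMode;
   import org.openjdk.jmh.annotations.Fork;
   import org.openjdk.jmh.annotations.Measurement;
   import org.openjdk.jmh.annotations.Mode;
   import org.openjdk.jmh.annotations.OutputTimeUnit;
   import org.openjdk.jmh.annotations.Warmup;
   
   // Fewer measurement iterations shorten the run but increase the bias
   // of the reported score; 6 matches the number mentioned above.
   @Warmup(iterations = 3)
   @Measurement(iterations = 6)
   @Fork(1)
   @BenchmarkMode(Mode.AverageTime)
   @OutputTimeUnit(TimeUnit.MILLISECONDS)
   public class SchedulerBenchmarkBase {
       // Concrete scheduler benchmarks would extend this class and
       // inherit the iteration settings.
   }
   ```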
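   
   And for point 4, one way to keep the scheduler benchmarks in their own invocation would be a dedicated JMH runner with an include pattern. The regex here is only an assumption about how the benchmark classes would be named:
   
   ```java
   import org.openjdk.jmh.runner.Runner;
   import org.openjdk.jmh.runner.RunnerException;
   import org.openjdk.jmh.runner.options.Options;
   import org.openjdk.jmh.runner.options.OptionsBuilder;
   
   public class SchedulerBenchmarkRunner {
   
       public static void main(String[] args) throws RunnerException {
           // Select only the scheduler benchmarks, so they run in a
           // separate invocation and do not affect the other benchmarks.
           Options options = new OptionsBuilder()
                   .include(".*SchedulerBenchmark.*")
                   .build();
           new Runner(options).run();
       }
   }
   ```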
   
   Thank you for the great suggestions. I'll improve my code accordingly. Would you mind if I reach out to you about how to configure Jenkins once the pull request is ready? I think it would be better to deploy the correct code.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org