Posted to user@commons.apache.org by Georgi Stoyanov <g....@draftkings.com> on 2021/09/02 12:16:18 UTC

JMH benchmark result analysis

Hi guys,
Recently we started to make our own jmh benchmarks of some parts of our code. To analyze the results I created an enum with hardcoded values from the first run in Jenkins. Then on each merge into master I'm comparing the results from the benchmark against those values and mark the build as failing if the benchmarks are way too much changed (ofc there's some deviation - 15%). Anyway, this approach doesn't look very good to me since I watched Aleksey presentation about the nanotime. Is there another way to approach the results? I'm not sure this is related, but we are running the tests inside the mvn test phase with junit/scalatest and that's how fail the build.


Kind Regards,
Georgi Stoyanov