Posted to github@arrow.apache.org by "alamb (via GitHub)" <gi...@apache.org> on 2023/03/27 20:06:03 UTC

[GitHub] [arrow-datafusion] alamb commented on a diff in pull request #5655: Add compare.py to compare the output of multiple benchmarks

alamb commented on code in PR #5655:
URL: https://github.com/apache/arrow-datafusion/pull/5655#discussion_r1149738050


##########
benchmarks/README.md:
##########
@@ -76,12 +76,54 @@ cargo run --release --bin tpch -- convert --input ./data --output /mnt/tpch-parq
 
 Or if you want to verify and run all the queries in the benchmark, you can just run `cargo test`.
 
-### Machine readable benchmark summary
+### Comparing results between runs
 
 Any `tpch` execution with `-o <dir>` argument will produce a summary file right under the `<dir>`
 directory. It is a JSON serialized form of all the runs that happened as well as the runtime metadata
 (number of cores, DataFusion version, etc.).
 
+```shell
+$ git checkout main
+# generate an output script in /tmp/output_main
+$ cargo run --release --bin tpch -- benchmark datafusion --iterations 5 --path /data --format parquet -o /tmp/output_main
+$ git checkout my_branch
+# generate an output script in /tmp/output_my_branch
+$ cargo run --release --bin tpch -- benchmark datafusion --iterations 5 --path /data --format parquet -o /tmp/output_my_branch

Review Comment:
   Thank you for these suggestions, I have made them in dc5099da3
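For context, the comparison the PR's `compare.py` performs over two `-o <dir>` summary files could be sketched roughly as below. This is only an illustrative sketch: the field names (`queries`, `query`, `elapsed_ms`) are assumptions for the example, not the actual schema the `tpch` binary emits.

```python
# Hypothetical sketch of comparing two benchmark summary files.
# NOTE: the JSON layout assumed here ("queries" -> list of
# {"query", "elapsed_ms"}) is illustrative only; the real summary
# format is defined by the tpch binary and compare.py in the PR.
import json


def load_timings(path):
    """Return {query_name: elapsed_ms} parsed from a summary file."""
    with open(path) as f:
        summary = json.load(f)
    return {q["query"]: q["elapsed_ms"] for q in summary["queries"]}


def compare(baseline_path, candidate_path):
    """Print per-query elapsed times and the candidate/baseline ratio."""
    base = load_timings(baseline_path)
    cand = load_timings(candidate_path)
    for query in sorted(base):
        if query in cand:
            ratio = cand[query] / base[query]
            print(f"{query}: {base[query]:.1f} ms -> "
                  f"{cand[query]:.1f} ms ({ratio:.2f}x)")
```

Usage would then mirror the README steps above, e.g. `compare("/tmp/output_main/summary.json", "/tmp/output_my_branch/summary.json")` (file name assumed).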



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org