Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/11/29 19:59:46 UTC

[GitHub] [beam] damccorm commented on a diff in pull request #24347: Add Pytorch RunInference GPU benchmark

damccorm commented on code in PR #24347:
URL: https://github.com/apache/beam/pull/24347#discussion_r1035219325


##########
.test-infra/jenkins/LoadTestsBuilder.groovy:
##########
@@ -43,7 +43,8 @@ class LoadTestsBuilder {
 
 
   static void loadTest(context, String title, Runner runner, SDK sdk, Map<String, ?> options,
-      String mainClass, List<String> jobSpecificSwitches = null, String requirementsTxtFile = null) {
+      String mainClass, List<String> jobSpecificSwitches = null, String requirementsTxtFile = null,
+      String pythonVersion = null) {

Review Comment:
   I don't quite follow why we need these changes - could you explain them?
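   For context on what the diff does mechanically: it appends an optional trailing parameter with a `null` default to `loadTest`, so existing call sites that omit it keep compiling while new callers can pin an interpreter version. A minimal Python analogue of that pattern (all names here are illustrative, not the actual Jenkins DSL API):

   ```python
   # Hypothetical sketch: an optional trailing parameter with a default,
   # mirroring the Groovy signature change. Existing callers that omit
   # python_version are unaffected; new callers can supply one.
   def load_test(title, main_class,
                 job_specific_switches=None,
                 requirements_txt_file=None,
                 python_version=None):
       # Fall back to a default interpreter when none is pinned.
       return python_version or "default"

   print(load_test("benchmark", "Main"))                        # existing call site -> default
   print(load_test("benchmark", "Main", python_version="3.8"))  # new call site -> 3.8
   ```

   The design point is backward compatibility: because the new parameter is last and defaulted, no existing invocation of the builder needs to change.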



##########
sdks/python/apache_beam/testing/benchmarks/inference/README.md:
##########
@@ -62,12 +80,24 @@ the following metrics:
 - Mean Load Model Latency - the average amount of time it takes to load a model. This is done once per DoFn instance on worker
 startup, so the cost is amortized across the pipeline.
 
+These metrics are published to InfluxDB and BigQuery.
+
+<h3>Pytorch Language Modeling Tests</h3>
+
+* Pytorch Language Modeling using Hugging Face bert-base-uncased model.
+  * machine_type: n1-standard-2
+  * num_workers: 250
+  * autoscaling_algorithm: NONE
+  * disk_size_gb: 50
+
+* Pytorch Language Modeling using Hugging Face bert-large-uncased model.
+  * machine_type: n1-standard-2
+  * num_workers: 250
+  * autoscaling_algorithm: NONE
+  * disk_size_gb: 50
+
 Approximate size of the models used in the tests
 * bert-base-uncased: 417.7 MB
 * bert-large-uncased: 1.2 GB
 
-The above tests are configured to run using following configurations
- * machine_type: n1-standard-2
- * num_workers: 250
- * autoscaling_algorithm: NONE
- * disk_size_gb: 75
+All the performance tests are defined at [job_InferenceBenchmarkTests_Python.groovy .](https://github.com/apache/beam/blob/master/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy)

Review Comment:
   ```suggestion
   All the performance tests are defined at [job_InferenceBenchmarkTests_Python.groovy](https://github.com/apache/beam/blob/master/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy).
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
