Posted to github@beam.apache.org by "damccorm (via GitHub)" <gi...@apache.org> on 2023/02/23 13:15:41 UTC

[GitHub] [beam] damccorm commented on a diff in pull request #25607: restructure ml overview website page

damccorm commented on code in PR #25607:
URL: https://github.com/apache/beam/pull/25607#discussion_r1115671215


##########
website/www/site/content/en/documentation/ml/overview.md:
##########
@@ -78,20 +80,28 @@ The RunInference API doesn't currently support making remote inference calls usi
 
 * Consider monitoring and measuring the performance of a pipeline when deploying, because monitoring can provide insight into the status and health of the application.
 
+## Model validation
+
+Model validation allows you to benchmark your model’s performance against an unseen dataset. You can extract chosen metrics, create visualizations, log metadata, and compare the performance of different models with the end goal of validating whether your model is ready to deploy. Beam provides support for running model evaluation on a TensorFlow model directly inside your pipeline.

Review Comment:
   ```suggestion
   Model validation allows you to benchmark your model’s performance against a previously unseen dataset. You can extract chosen metrics, create visualizations, log metadata, and compare the performance of different models with the end goal of validating whether your model is ready to deploy. Beam provides support for running model evaluation on a TensorFlow model directly inside your pipeline.
   ```
   
   Small wording nit


