Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/09/17 00:04:35 UTC

[GitHub] [beam] liferoad commented on a diff in pull request #23218: updated the pydoc for running a custom model on Beam

liferoad commented on code in PR #23218:
URL: https://github.com/apache/beam/pull/23218#discussion_r973510522


##########
website/www/site/content/en/documentation/sdks/python-machine-learning.md:
##########
@@ -83,6 +83,14 @@ You need to provide a path to a file that contains the pickled Scikit-learn mode
    `model_uri=<path_to_pickled_file>` and `model_file_type: <ModelFileType>`, where you can specify
    `ModelFileType.PICKLE` or `ModelFileType.JOBLIB`, depending on how the model was serialized.
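
For reference, constructing a Scikit-learn handler with those parameters might look like the following sketch, using `SklearnModelHandlerNumpy` from `apache_beam.ml.inference.sklearn_inference`; the bucket path is a hypothetical placeholder:

```python
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import ModelFileType
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Point model_uri at your own pickled model file (placeholder path below).
sklearn_handler = SklearnModelHandlerNumpy(
    model_uri='gs://my-bucket/sklearn_model.pkl',
    model_file_type=ModelFileType.PICKLE)  # or ModelFileType.JOBLIB

# Used in a pipeline as: ... | RunInference(sklearn_handler) | ...
```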
 
+### Use custom models
+
+The RunInference API is designed to be flexible enough to support any custom machine learning model. You only need to create your own `ModelHandler` or `KeyedModelHandler` that defines how the model is loaded from a location the pipeline can access and how the model is used to run inference.
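
A minimal sketch of such a custom handler, assuming the `apache_beam.ml.inference.base` interface; the class name, the doubling "model", and the bucket path are hypothetical placeholders:

```python
from typing import Any, Dict, Iterable, Optional, Sequence

import apache_beam as beam
from apache_beam.ml.inference.base import ModelHandler
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.ml.inference.base import RunInference


class DoublingModelHandler(ModelHandler[float, PredictionResult, Any]):
  """Hypothetical handler for a 'model' that simply doubles its input."""

  def __init__(self, model_uri: str):
    # Location the pipeline workers can access (hypothetical placeholder).
    self._model_uri = model_uri

  def load_model(self) -> Any:
    # Load the model once per worker; here the 'model' is just a function.
    return lambda x: 2 * x

  def run_inference(
      self,
      batch: Sequence[float],
      model: Any,
      inference_args: Optional[Dict[str, Any]] = None,
  ) -> Iterable[PredictionResult]:
    # Run the model on each element and pair every input with its prediction.
    for example in batch:
      yield PredictionResult(example=example, inference=model(example))


with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([1.0, 2.0, 3.0])
      | RunInference(DoublingModelHandler(model_uri='gs://my-bucket/model'))
      | beam.Map(print))
```

If the elements arrive as `(key, example)` pairs, the same handler can be wrapped in a `KeyedModelHandler` (also in `apache_beam.ml.inference.base`) so the keys are carried through to the predictions.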

Review Comment:
   looks good. Thanks for rewording these.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org