Posted to commits@beam.apache.org by da...@apache.org on 2022/09/17 12:12:44 UTC

[beam] branch master updated: updated the pydoc for running a custom model on Beam (#23218)

This is an automated email from the ASF dual-hosted git repository.

damccorm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/master by this push:
     new 8754cc09048 updated the pydoc for running a custom model on Beam (#23218)
8754cc09048 is described below

commit 8754cc0904872d37edbb8b4d3b8d9f92aad94acc
Author: liferoad <hu...@gmail.com>
AuthorDate: Sat Sep 17 08:12:35 2022 -0400

    updated the pydoc for running a custom model on Beam (#23218)
    
    * updated the pydoc for running a custom model on Beam
    
    * Update website/www/site/content/en/documentation/sdks/python-machine-learning.md
    
    Co-authored-by: Anand Inguva <34...@users.noreply.github.com>
    
    * Update website/www/site/content/en/documentation/sdks/python-machine-learning.md
    
    Co-authored-by: Danny McCormick <da...@google.com>
    
    Co-authored-by: XQ Hu <xq...@google.com>
    Co-authored-by: Anand Inguva <34...@users.noreply.github.com>
    Co-authored-by: Danny McCormick <da...@google.com>
---
 examples/notebooks/beam-ml/run_custom_inference.ipynb            | 7 ++++++-
 .../content/en/documentation/sdks/python-machine-learning.md     | 9 +++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/examples/notebooks/beam-ml/run_custom_inference.ipynb b/examples/notebooks/beam-ml/run_custom_inference.ipynb
index f08ccaf3426..b2fa0aced49 100644
--- a/examples/notebooks/beam-ml/run_custom_inference.ipynb
+++ b/examples/notebooks/beam-ml/run_custom_inference.ipynb
@@ -37,7 +37,7 @@
         "id": "b6f8f3af-744e-4eaa-8a30-6d03e8e4d21e"
       },
       "source": [
-        "# Bring your own Machine Leanring (ML) model to Beam RunInference\n",
+        "# Bring your own Machine Learning (ML) model to Beam RunInference\n",
         "\n",
         "<button>\n",
         "  <a href=\"https://beam.apache.org/documentation/sdks/python-machine-learning/\">\n",
@@ -549,6 +549,11 @@
       "provenance": [],
       "toc_visible": true
     },
+    "kernelspec": {
+      "display_name": "Python 3.9.13 ('venv': venv)",
+      "language": "python",
+      "name": "python3"
+    },
     "language_info": {
       "codemirror_mode": {
         "name": "ipython",
diff --git a/website/www/site/content/en/documentation/sdks/python-machine-learning.md b/website/www/site/content/en/documentation/sdks/python-machine-learning.md
index d34ddb99a57..cce9853990e 100644
--- a/website/www/site/content/en/documentation/sdks/python-machine-learning.md
+++ b/website/www/site/content/en/documentation/sdks/python-machine-learning.md
@@ -83,6 +83,15 @@ You need to provide a path to a file that contains the pickled Scikit-learn mode
    `model_uri=<path_to_pickled_file>` and `model_file_type: <ModelFileType>`, where you can specify
    `ModelFileType.PICKLE` or `ModelFileType.JOBLIB`, depending on how the model was serialized.
 
+### Use custom models
+
+If you would like to use a model that isn't supported by one of the built-in frameworks, the RunInference API is designed to work with any custom machine learning model.
+You only need to create your own `ModelHandler` or `KeyedModelHandler` with the logic to load your model and to run the inference.
+
+A simple example can be found in [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_custom_inference.ipynb).
+The `load_model` method shows how to load the model using the popular `spaCy` package, while `run_inference` shows how to run the inference on a batch of examples.
+
+
 ### Use multiple models
 
 You can also use the RunInference transform to add multiple inference models to your pipeline.
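The custom `ModelHandler` described in the documentation change above can be sketched as follows. This is a dependency-free illustration of the two-method pattern only: real code would subclass `apache_beam.ml.inference.base.ModelHandler` and pass the handler to the `RunInference` transform, and `FakeModel` with its token-count "prediction" is a hypothetical stand-in for a loaded model such as a spaCy pipeline.

```python
# Sketch of the custom ModelHandler pattern from the docs change above.
# The Beam dependency is replaced by plain Python so the shape of the
# two required methods is clear; FakeModel is hypothetical.

from typing import Any, Iterable, Optional, Sequence


class FakeModel:
    """Hypothetical stand-in for a loaded model (e.g. a spaCy pipeline)."""

    def predict(self, text: str) -> int:
        # Trivial "inference": count whitespace-separated tokens.
        return len(text.split())


class CustomModelHandler:
    """Mirrors the two methods a custom Beam ModelHandler must provide."""

    def load_model(self) -> FakeModel:
        # In a real handler this loads the model once per worker,
        # e.g. spacy.load("en_core_web_sm").
        return FakeModel()

    def run_inference(
        self,
        batch: Sequence[str],
        model: FakeModel,
        inference_args: Optional[dict] = None,
    ) -> Iterable[Any]:
        # Run the loaded model on every element of the batch.
        return [model.predict(text) for text in batch]


handler = CustomModelHandler()
model = handler.load_model()
predictions = handler.run_inference(["hello world", "one two three"], model)
print(list(predictions))  # -> [2, 3]
```

In a real pipeline the handler would be used as `pcoll | RunInference(CustomModelHandler())`, with Beam calling `load_model` once per worker and `run_inference` on each batch.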