Posted to commits@beam.apache.org by gi...@apache.org on 2022/09/17 16:15:53 UTC

[beam] branch asf-site updated: Publishing website 2022/09/17 16:15:45 at commit 8754cc0

This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new f298d0fde77 Publishing website 2022/09/17 16:15:45 at commit 8754cc0
f298d0fde77 is described below

commit f298d0fde77d6ab4f31873e385c7f5ad17eab839
Author: jenkins <bu...@apache.org>
AuthorDate: Sat Sep 17 16:15:46 2022 +0000

    Publishing website 2022/09/17 16:15:45 at commit 8754cc0
---
 .../documentation/sdks/python-machine-learning/index.html           | 6 ++++--
 website/generated-content/sitemap.xml                               | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/website/generated-content/documentation/sdks/python-machine-learning/index.html b/website/generated-content/documentation/sdks/python-machine-learning/index.html
index 9c80b0f81c8..4f10fc29548 100644
--- a/website/generated-content/documentation/sdks/python-machine-learning/index.html
+++ b/website/generated-content/documentation/sdks/python-machine-learning/index.html
@@ -19,7 +19,7 @@
 function addPlaceholder(){$('input:text').attr('placeholder',"What are you looking for?");}
 function endSearch(){var search=document.querySelector(".searchBar");search.classList.add("disappear");var icons=document.querySelector("#iconsBar");icons.classList.remove("disappear");}
 function blockScroll(){$("body").toggleClass("fixedPosition");}
-function openMenu(){addPlaceholder();blockScroll();}</script><div class="clearfix container-main-content"><div class="section-nav closed" data-offset-top=90 data-offset-bottom=500><span class="section-nav-back glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list data-section-nav><li><span class=section-nav-list-main-title>Languages</span></li><li><span class=section-nav-list-title>Java</span><ul class=section-nav-list><li><a href=/documentation/sdks/java/>Java SDK overvi [...]
+function openMenu(){addPlaceholder();blockScroll();}</script><div class="clearfix container-main-content"><div class="section-nav closed" data-offset-top=90 data-offset-bottom=500><span class="section-nav-back glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list data-section-nav><li><span class=section-nav-list-main-title>Languages</span></li><li><span class=section-nav-list-title>Java</span><ul class=section-nav-list><li><a href=/documentation/sdks/java/>Java SDK overvi [...]
 Pydoc</a></td></table><p><br><br><br></p><p>You can use the RunInference API with Apache Beam to run machine learning (ML) models for local and remote inference in batch and streaming pipelines. Starting with Apache Beam 2.40.0, the PyTorch and Scikit-learn frameworks are supported. You can create multiple types of transforms using the RunInference API: the API takes several types of setup parameters from model handlers, and the parameter type determines the model implementation.</p><h2 [...]
 <a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/utils/shared.py#L20><code>Shared</code> class documentation</a>.</p><h3 id=multi-model-pipelines>Multi-model pipelines</h3><p>The RunInference API can be composed into multi-model pipelines. Multi-model pipelines can be useful for A/B testing or for building out ensembles made up of models that perform tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, language detection, corefer [...]
 with pipeline as p:
@@ -31,7 +31,9 @@ from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
 from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerKeyedTensor
 </code></pre><h3 id=use-pre-trained-models>Use pre-trained models</h3><p>This section provides requirements for using pre-trained models with PyTorch and Scikit-learn.</p><h4 id=pytorch>PyTorch</h4><p>You need to provide a path to a file that contains the model&rsquo;s saved weights. This path must be accessible by the pipeline. To use pre-trained models with the RunInference API and the PyTorch framework, complete the following steps:</p><ol><li>Download the pre-trained weights and host t [...]
 <code>model_uri=&lt;path_to_pickled_file></code> and <code>model_file_type: &lt;ModelFileType></code>, where you can specify
-<code>ModelFileType.PICKLE</code> or <code>ModelFileType.JOBLIB</code>, depending on how the model was serialized.</li></ol><h3 id=use-multiple-models>Use multiple models</h3><p>You can also use the RunInference transform to add multiple inference models to your pipeline.</p><h4 id=ab-pattern>A/B Pattern</h4><pre><code>with pipeline as p:
+<code>ModelFileType.PICKLE</code> or <code>ModelFileType.JOBLIB</code>, depending on how the model was serialized.</li></ol><h3 id=use-custom-models>Use custom models</h3><p>If you want to use a model that isn&rsquo;t supported by one of the built-in frameworks, the RunInference API is designed to be flexible enough to let you use any custom machine learning model.
+You only need to create your own <code>ModelHandler</code> or <code>KeyedModelHandler</code> that contains the logic to load your model and to run inference with it.</p><p>A simple example can be found in <a href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_custom_inference.ipynb>this notebook</a>.
+The <code>load_model</code> method shows how to load the model using the popular <code>spaCy</code> package, while <code>run_inference</code> shows how to run inference on a batch of examples.</p><h3 id=use-multiple-models>Use multiple models</h3><p>You can also use the RunInference transform to add multiple inference models to your pipeline.</p><h4 id=ab-pattern>A/B Pattern</h4><pre><code>with pipeline as p:
    data = p | 'Read' &gt;&gt; beam.ReadFromSource('a_source')
    model_a_predictions = data | RunInference(&lt;model_handler_A&gt;)
    model_b_predictions = data | RunInference(&lt;model_handler_B&gt;)
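
The Scikit-learn steps in the documentation above amount to: serialize a fitted model to a file, then point `model_uri` at that file and name the serialization format via `model_file_type`. Below is a minimal sketch of that serialization round trip using only the standard library; `FakeModel` is a made-up stand-in for a fitted estimator, not a Beam or Scikit-learn API.

```python
# Hedged sketch of the "serialize, then load by path" idea behind
# model_uri + ModelFileType.PICKLE. FakeModel is a hypothetical stand-in
# for a fitted Scikit-learn estimator.
import os
import pickle
import tempfile

class FakeModel:
    def predict(self, xs):
        return [x * 2 for x in xs]

# Step 1: pickle the model to a file; in Beam, model_uri would point here.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(FakeModel(), f)

# Roughly what a pickle-based model handler does when it loads the model.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored.predict([1, 2, 3]))  # [2, 4, 6]
```

For a model serialized with joblib rather than pickle, the documentation above says to pass `ModelFileType.JOBLIB` instead of `ModelFileType.PICKLE`.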
diff --git a/website/generated-content/sitemap.xml b/website/generated-content/sitemap.xml
index d1a211c09fa..d684b4eca34 100644
--- a/website/generated-content/sitemap.xml
+++ b/website/generated-content/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/blog/beam-2.41.0/</loc><lastmod>2022-08-23T21:36:06+00:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/catego [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/blog/beam-2.41.0/</loc><lastmod>2022-08-23T21:36:06+00:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/catego [...]
\ No newline at end of file
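
The custom-model section added in this commit says a handler needs only logic to load the model and to run inference on a batch. As a hedged illustration, the sketch below mimics that two-method shape in plain Python; `TinyModel`, `TinyModelHandler`, and the word-weight scoring are inventions for this example, not Beam or spaCy APIs. In a real pipeline you would instead subclass `apache_beam.ml.inference.base.ModelHandler` and pass your handler to the RunInference transform.

```python
# Illustrative stand-ins only -- not Beam APIs.
class TinyModel:
    """Toy 'model': sums per-word weights in a string."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, text):
        return sum(self.weights.get(word, 0) for word in text.split())


class TinyModelHandler:
    """Mirrors the two methods a custom model handler must supply."""
    def load_model(self):
        # In Beam this runs once per worker; here we just build the toy model.
        return TinyModel({"good": 1, "bad": -1})

    def run_inference(self, batch, model):
        # Score a whole batch of examples, one result per input.
        return [model.predict(example) for example in batch]


handler = TinyModelHandler()
model = handler.load_model()
preds = handler.run_inference(["good movie", "bad bad day"], model)
print(preds)  # [1, -2]
```

Separating `load_model` from `run_inference` matters because loading is expensive and happens once per worker, while inference runs per batch; the linked notebook follows the same split with a real spaCy model.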