Posted to commits@beam.apache.org by gi...@apache.org on 2022/09/03 04:17:14 UTC

[beam] branch asf-site updated: Publishing website 2022/09/03 04:17:08 at commit 31561e2

This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 10544de8fb3 Publishing website 2022/09/03 04:17:08 at commit 31561e2
10544de8fb3 is described below

commit 10544de8fb39f281a967a3e14ed2bbf1442fcd34
Author: jenkins <bu...@apache.org>
AuthorDate: Sat Sep 3 04:17:09 2022 +0000

    Publishing website 2022/09/03 04:17:08 at commit 31561e2
---
 .../sdks/python-machine-learning/index.html        | 30 ++++++++++++++++++----
 website/generated-content/sitemap.xml              |  2 +-
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/website/generated-content/documentation/sdks/python-machine-learning/index.html b/website/generated-content/documentation/sdks/python-machine-learning/index.html
index 0564deb78db..bdbb551102a 100644
--- a/website/generated-content/documentation/sdks/python-machine-learning/index.html
+++ b/website/generated-content/documentation/sdks/python-machine-learning/index.html
@@ -19,9 +19,9 @@
 function addPlaceholder(){$('input:text').attr('placeholder',"What are you looking for?");}
 function endSearch(){var search=document.querySelector(".searchBar");search.classList.add("disappear");var icons=document.querySelector("#iconsBar");icons.classList.remove("disappear");}
 function blockScroll(){$("body").toggleClass("fixedPosition");}
-function openMenu(){addPlaceholder();blockScroll();}</script><div class="clearfix container-main-content"><div class="section-nav closed" data-offset-top=90 data-offset-bottom=500><span class="section-nav-back glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list data-section-nav><li><span class=section-nav-list-main-title>Languages</span></li><li><span class=section-nav-list-title>Java</span><ul class=section-nav-list><li><a href=/documentation/sdks/java/>Java SDK overvi [...]
-Pydoc</a></td></table><p><br><br><br></p><p>You can use Apache Beam with the RunInference API to use machine learning (ML) models to do local and remote inference with batch and streaming pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn frameworks are supported. You can create multiple types of transforms using the RunInference API: the API takes multiple types of setup parameters from model handlers, and the parameter type determines the model implementation.</p><h2 [...]
-<a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/utils/shared.py#L20><code>Shared</code> class documentation</a>.</p><h3 id=multi-model-pipelines>Multi-model pipelines</h3><p>The RunInference API can be composed into multi-model pipelines. Multi-model pipelines can be useful for A/B testing or for building out ensembles that are comprised of models that perform tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, language detecti [...]
+function openMenu(){addPlaceholder();blockScroll();}</script><div class="clearfix container-main-content"><div class="section-nav closed" data-offset-top=90 data-offset-bottom=500><span class="section-nav-back glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list data-section-nav><li><span class=section-nav-list-main-title>Languages</span></li><li><span class=section-nav-list-title>Java</span><ul class=section-nav-list><li><a href=/documentation/sdks/java/>Java SDK overvi [...]
+Pydoc</a></td></table><p><br><br><br></p><p>You can use Apache Beam with the RunInference API to use machine learning (ML) models to do local and remote inference with batch and streaming pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn frameworks are supported. You can create multiple types of transforms using the RunInference API: the API takes multiple types of setup parameters from model handlers, and the parameter type determines the model implementation.</p><h2 [...]
+<a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/utils/shared.py#L20><code>Shared</code> class documentation</a>.</p><h3 id=multi-model-pipelines>Multi-model pipelines</h3><p>The RunInference API can be composed into multi-model pipelines. Multi-model pipelines can be useful for A/B testing or for building out ensembles made up of models that perform tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, language detection, corefer [...]
 with pipeline as p:
    predictions = ( p |  'Read' &gt;&gt; beam.ReadFromSource('a_source')
                      | 'RunInference' &gt;&gt; RunInference(&lt;model_handler&gt;)
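
The updated paragraph above notes that RunInference takes its setup parameters from a model handler, and that the handler type selects the model implementation. As a rough sketch of what that configuration looks like for one of the frameworks supported as of Beam 2.40.0 (scikit-learn), assuming a placeholder model path that is not part of this commit:

# A minimal sketch, not taken from the commit above: configure a model handler
# and pass it to RunInference. The model path is a placeholder.
import apache_beam as beam
import numpy
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# scikit-learn handler: loads a pickled model from the given path.
model_handler = SklearnModelHandlerNumpy(model_uri='/path/to/model.pkl')

# The PyTorch equivalent takes a saved state dict plus the model class and its
# constructor arguments (PytorchModelHandlerTensor in
# apache_beam.ml.inference.pytorch_inference).

with beam.Pipeline() as pipeline:
    _ = (pipeline
         | 'Read' >> beam.Create([numpy.array([1.0, 2.0, 3.0])])  # example inputs
         | 'RunInference' >> RunInference(model_handler)
         | 'Print' >> beam.Map(print))
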
@@ -66,8 +66,28 @@ with pipeline as p:
                 | 'ProcessOutput' &gt;&gt; beam.ParDo(PostProcessor()))
 </code></pre><p>If you need to use this object explicitly, include the following line in your pipeline to import the object:</p><pre><code>from apache_beam.ml.inference.base import PredictionResult
 </code></pre><p>For more information, see the <a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/ml/inference/base.py#L65><code>PredictionResult</code> documentation</a>.</p><h2 id=run-a-machine-learning-pipeline>Run a machine learning pipeline</h2><p>For detailed instructions explaining how to build and run a pipeline that uses ML models, see the
-<a href=https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference>Example RunInference API pipelines</a> on GitHub.</p><h2 id=beam-java-sdk-support>Beam Java SDK support</h2><p>RunInference API is available to Beam Java SDK 2.41.0 and later through Apache Beam&rsquo;s <a href=https://beam.apache.org/documentation/programming-guide/#multi-language-pipelines>Multi-language Pipelines framework</a>. Please see <a href=https://github.com/apache/beam/blob/master/sdk [...]
-<a href=/documentation/sdks/python-machine-learning/#batchelements-ptransform>BatchElements PTransforms</a>. For an example, see our <a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/inference/pytorch_language_modeling.py>language modeling example</a>.</p><h2 id=related-links>Related links</h2><ul><li><a href=/documentation/transforms/python/elementwise/runinference>RunInference transforms</a></li><li><a href=https://github.com/apache/beam/tree/master/sd [...]
+<a href=https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference>Example RunInference API pipelines</a> on GitHub.</p><h2 id=beam-java-sdk-support>Beam Java SDK support</h2><p>The RunInference API is available with the Beam Java SDK versions 2.41.0 and later through Apache Beam&rsquo;s <a href=https://beam.apache.org/documentation/programming-guide/#multi-language-pipelines>Multi-language Pipelines framework</a>. For information about the Java wrapper transfo [...]
+from apache_beam.ml.inference.base import RunInference
+from tensorflow_serving.apis import prediction_log_pb2
+from tfx_bsl.public.proto import model_spec_pb2
+from tfx_bsl.public.tfxio import TFExampleRecord
+from tfx_bsl.public.beam.run_inference import CreateModelHandler
+
+pipeline = beam.Pipeline()
+tfexample_beam_record = TFExampleRecord(file_pattern='/path/to/examples')
+saved_model_spec = model_spec_pb2.SavedModelSpec(model_path='/path/to/model')
+inference_spec_type = model_spec_pb2.InferenceSpecType(saved_model_spec=saved_model_spec)
+model_handler = CreateModelHandler(inference_spec_type)
+with pipeline as p:
+    _ = (p | tfexample_beam_record.RawRecordBeamSource()
+           | RunInference(model_handler)
+           | beam.Map(print)
+        )
+</code></pre><p>The model handler created with <code>CreateModelHandler()</code> is always unkeyed. To make a keyed model handler, wrap the unkeyed model handler in a <code>KeyedModelHandler</code>, which takes the <code>tfx-bsl</code> model handler as a parameter. For example:</p><pre><code>from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.base import KeyedModelHandler
+RunInference(KeyedModelHandler(tf_handler))
+</code></pre><p>If you are unsure whether your data is keyed, you can also use <code>MaybeKeyedModelHandler</code>.</p><p>For more information, see the <a href=https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.KeyedModelHandler><code>KeyedModelHandler</code> documentation</a>.</p><h2 id=troubleshooting>Troubleshooting</h2><p>If you run into problems with your pipeline or job, this section lists issues that you might encounter and provides suggestions [...]
+<a href=/documentation/sdks/python-machine-learning/#batchelements-ptransform>BatchElements PTransforms</a>. For an example, see our <a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/inference/pytorch_language_modeling.py>language modeling example</a>.</p><h2 id=related-links>Related links</h2><ul><li><a href=/documentation/transforms/python/elementwise/runinference>RunInference transforms</a></li><li><a href=https://github.com/apache/beam/tree/master/sd [...]
 Pydoc</a></td></table><p><br><br><br></p></div></div><footer class=footer><div class=footer__contained><div class=footer__cols><div class="footer__cols__col footer__cols__col__logos"><div class=footer__cols__col__logo><img src=/images/beam_logo_circle.svg class=footer__logo alt="Beam logo"></div><div class=footer__cols__col__logo><img src=/images/apache_logo_circle.svg class=footer__logo alt="Apache logo"></div></div><div class=footer-wrapper><div class=wrapper-grid><div class=footer__co [...]
 <a href=http://www.apache.org>The Apache Software Foundation</a>
 | <a href=/privacy_policy>Privacy Policy</a>
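
The second hunk above pipes RunInference output into a 'ProcessOutput' ParDo and imports PredictionResult. A minimal sketch of what such a post-processing DoFn might look like, assuming the downstream step only needs the input example and its inference (the PostProcessor class itself is not defined in this commit):

import apache_beam as beam
from apache_beam.ml.inference.base import PredictionResult

class PostProcessor(beam.DoFn):
    # Unpack each PredictionResult into an (example, inference) pair.
    def process(self, element: PredictionResult):
        yield element.example, element.inference
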
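Likewise, the added KeyedModelHandler text applies to any unkeyed handler, not just the tfx-bsl one. A short sketch of the same pattern with a scikit-learn handler, assuming a placeholder model path; keys pass through RunInference untouched and come back attached to each PredictionResult:

import apache_beam as beam
import numpy
from apache_beam.ml.inference.base import RunInference, KeyedModelHandler
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Wrap any unkeyed handler; the model path is a placeholder.
keyed_handler = KeyedModelHandler(
    SklearnModelHandlerNumpy(model_uri='/path/to/model.pkl'))

with beam.Pipeline() as pipeline:
    _ = (pipeline
         | beam.Create([('row-1', numpy.array([1.0, 2.0]))])  # (key, example) pairs
         | RunInference(keyed_handler)                        # -> (key, PredictionResult)
         | beam.Map(print))
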
diff --git a/website/generated-content/sitemap.xml b/website/generated-content/sitemap.xml
index 75a4d33f216..ebc1b066546 100644
--- a/website/generated-content/sitemap.xml
+++ b/website/generated-content/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/blog/beam-2.41.0/</loc><lastmod>2022-08-23T21:36:06+00:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/catego [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/blog/beam-2.41.0/</loc><lastmod>2022-08-23T21:36:06+00:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/catego [...]
\ No newline at end of file