Posted to github@beam.apache.org by "damccorm (via GitHub)" <gi...@apache.org> on 2023/04/24 17:14:20 UTC

[GitHub] [beam] damccorm commented on a diff in pull request #26404: Add run_inference windowing notebook

damccorm commented on code in PR #26404:
URL: https://github.com/apache/beam/pull/26404#discussion_r1175570403


##########
examples/notebooks/beam-ml/run_inference_windowing.ipynb:
##########
@@ -0,0 +1,477 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": []
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "K2MpsIa-ncMZ"
+      },
+      "outputs": [],
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Apache Beam RunInference Windowing Example\n",
+        "\n",
+        "<table align=\"left\">\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n",
+        "  </td>\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n",
+        "  </td>\n",
+        "</table>\n"
+      ],
+      "metadata": {
+        "id": "fKxfINuCPsh9"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "This notebook demonstrates the use of the RunInference transform together with windowing in a streaming pipeline. The pipeline predicts the quality of milk samples and classifies them as 'good', 'bad' or 'medium'. The predictions for each window are then aggregated. The pipeline makes use of the XGBoost model handler. For more information about the RunInference API, see the [Machine Learning section of the Apache Beam documentation](https://beam.apache.org/documentation/ml/overview/).\n",
+        "\n",
+        "With RunInference, a model handlers manages batching, vectorization, and prediction optimization for your XGBoost pipeline or model.\n",
+        "\n",
+        "This notebook demonstrates the following common RunInference patterns:\n",
+        "\n",
+        "- Generate predictions for all samples in a window\n",
+        "- Aggregate the results per window after RunInference\n",
+        "- Print the aggregations"
+      ],
+      "metadata": {
+        "id": "knGVsVR6P_nZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Complete the following setup steps:\n",
+        "- Install dependencies for Apache Beam.\n",
+        "- Install XGBoost.\n",
+        "- Download the [Milk Quality Dataset from Kaggle](https://www.kaggle.com/datasets/cpluzshrijayan/milkquality) and put in the current directory with the name `milk_quality.csv` (The dataset should be in csv format)"
+      ],
+      "metadata": {
+        "id": "s5PPNo9HRRe1"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "!pip install git+https://github.com/apache/beam.git\n",

Review Comment:
   Looks like this line isn't working. Since Beam 2.47 is almost ready to release with the XGBoost changes, I'd probably rather just wait on that and update it at that time. We can get this notebook fully approved/ready to go now, though, so that I just need to hit merge when the time comes.
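
   For example, once 2.47 is out the install cell could presumably just pin the released version instead of installing from source, roughly along these lines (sketch only; the exact version and any extras should be confirmed at release time):

       # Install the released Beam version assumed to include the XGBoost model handler.
       !pip install apache_beam==2.47.0
       !pip install xgboost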



##########
examples/notebooks/beam-ml/run_inference_windowing.ipynb:
##########
@@ -0,0 +1,477 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": []
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "K2MpsIa-ncMZ"
+      },
+      "outputs": [],
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Apache Beam RunInference Windowing Example\n",
+        "\n",
+        "<table align=\"left\">\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n",
+        "  </td>\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n",
+        "  </td>\n",
+        "</table>\n"
+      ],
+      "metadata": {
+        "id": "fKxfINuCPsh9"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "This notebook demonstrates the use of the RunInference transform together with windowing in a streaming pipeline. The pipeline predicts the quality of milk samples and classifies them as 'good', 'bad' or 'medium'. The predictions for each window are then aggregated. The pipeline makes use of the XGBoost model handler. For more information about the RunInference API, see the [Machine Learning section of the Apache Beam documentation](https://beam.apache.org/documentation/ml/overview/).\n",
+        "\n",
+        "With RunInference, a model handlers manages batching, vectorization, and prediction optimization for your XGBoost pipeline or model.\n",
+        "\n",
+        "This notebook demonstrates the following common RunInference patterns:\n",
+        "\n",
+        "- Generate predictions for all samples in a window\n",
+        "- Aggregate the results per window after RunInference\n",
+        "- Print the aggregations"
+      ],
+      "metadata": {
+        "id": "knGVsVR6P_nZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Complete the following setup steps:\n",
+        "- Install dependencies for Apache Beam.\n",
+        "- Install XGBoost.\n",
+        "- Download the [Milk Quality Dataset from Kaggle](https://www.kaggle.com/datasets/cpluzshrijayan/milkquality) and put in the current directory with the name `milk_quality.csv` (The dataset should be in csv format)"
+      ],
+      "metadata": {
+        "id": "s5PPNo9HRRe1"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "!pip install git+https://github.com/apache/beam.git\n",
+        "!pip install xgboost"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "YiPD9-j_RRNC",
+        "outputId": "757726e3-ab3a-4544-fd9d-625df4e526ec"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
+            "Collecting git+https://github.com/apache/beam.git\n",
+            "  Cloning https://github.com/apache/beam.git to /tmp/pip-req-build-4rrh40ti\n",
+            "  Running command git clone --filter=blob:none --quiet https://github.com/apache/beam.git /tmp/pip-req-build-4rrh40ti\n",
+            "  Resolved https://github.com/apache/beam.git to commit 47f1123f03f52744b951a8b6fa067b32e66112bf\n",
+            "  Running command git submodule update --init --recursive -q\n",
+            "\u001b[31mERROR: git+https://github.com/apache/beam.git does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.\u001b[0m\u001b[31m\n",
+            "\u001b[0m"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## About the dataset\n",
+        "\n",
+        "This dataset comes in the form of a csv file that consists of 7 columns: pH, temperature, taste, odor, fat, turbidity, and color. The dataset also contains a column that labels the quality of that sample as `good`, `bad` or `medium`."
+      ],
+      "metadata": {
+        "id": "Uz9BcQg_Qbva"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "import argparse\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import NamedTuple\n",
+        "\n",
+        "import pandas\n",
+        "from sklearn.model_selection import train_test_split\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "import xgboost\n",
+        "from apache_beam import window\n",
+        "from apache_beam.ml.inference import RunInference\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.xgboost_inference import XGBoostModelHandlerPandas\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.runners.runner import PipelineResult\n",
+        "from apache_beam.testing.test_stream import TestStream"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/",
+          "height": 383
+        },
+        "id": "sHDrJ1nTPqUv",
+        "outputId": "fe518b5f-bce6-4a9d-a8c8-43720ebc2638"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "error",
+          "ename": "ModuleNotFoundError",
+          "evalue": "ignored",
+          "traceback": [
+            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+            "\u001b[0;31mModuleNotFoundError\u001b[0m                       Traceback (most recent call last)",
+            "\u001b[0;32m<ipython-input-3-d037cd1bd088>\u001b[0m in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mxgboost\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mapache_beam\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mbeam\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mapache_beam\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mml\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minference\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mRunInference\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      5\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mapache_beam\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mml\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minference\u001b[0m\u001b[0;34m.\u001b[0m
 \u001b[0mxgboost_inference\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mXGBoostModelHandlerNumpy\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+            "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'apache_beam'",
+            "",
+            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0;32m\nNOTE: If your import is failing due to a missing package, you can\nmanually install dependencies using either !pip or !apt.\n\nTo view examples of installing some common dependencies, click the\n\"Open Examples\" button below.\n\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n"
+          ],
+          "errorDetails": {
+            "actions": [
+              {
+                "action": "open_url",
+                "actionText": "Open Examples",
+                "url": "/notebooks/snippets/importing_libraries.ipynb"
+              }
+            ]
+          }
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Load the Kaggle dataset and train XGBoost model\n",
+        "This section demonstrates the following steps:\n",
+        "1. Load the Milk Quality dataset from Kaggle.\n",
+        "2. Split the data in a training and test set.\n",
+        "2. Train the XGBoost classifier to predict the quality of milk.\n",
+        "3. Save the model in a JSON file using `mode.save_model`. (https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html)\n"
+      ],
+      "metadata": {
+        "id": "kpXjNoVgRpOb"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "DATASET = \"dataset.csv\"\n",
+        "TRAINING_SET = \"training_set.csv\"\n",
+        "TEST_SET = \"test_set.csv\"\n",
+        "LABELS = \"labels.csv\"\n",
+        "MODEL_STATE = \"model.json\""
+      ],
+      "metadata": {
+        "id": "cnH5lTahY6Ty"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Preprocessing helper functions"
+      ],
+      "metadata": {
+        "id": "KNRkuUQ9aA62"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_data(\n",
+        "    dataset_path: str,\n",
+        "    training_set_path: str,\n",
+        "    labels_path: str,\n",
+        "    test_set_path: str):\n",
+        "  \"\"\"\n",
+        "    Helper function to split the dataset into a training set\n",
+        "    and its labels and a test set. The training set and\n",
+        "    its labels are used to train a lightweight model.\n",
+        "    The test set is used to create a test streaming pipeline.\n",
+        "    Args:\n",
+        "        dataset_path: path to csv file containing the Kaggle\n",
+        "         milk quality dataset\n",
+        "        training_set_path: path to output the training samples\n",
+        "        labels_path:  path to output the labels for the training set\n",
+        "        test_set_path: path to output the test samples\n",
+        "    \"\"\"\n",
+        "  df = pandas.read_csv(dataset_path)\n",
+        "  df['Grade'].replace(['low', 'medium', 'high'], [0, 1, 2], inplace=True)\n",
+        "  x = df.drop(columns=['Grade'])\n",
+        "  y = df['Grade']\n",
+        "  x_train, x_test, y_train, _ = \\\n",
+        "      train_test_split(x, y, test_size=0.60, random_state=99)\n",
+        "  x_train.to_csv(training_set_path, index=False)\n",
+        "  y_train.to_csv(labels_path, index=False)\n",
+        "  x_test.to_csv(test_set_path, index=False)\n",
+        "\n",
+        "\n",
+        "def train_model(\n",
+        "    samples_path: str, labels_path: str, model_state_output_path: str):\n",
+        "  \"\"\"Function to train the XGBoost model.\n",
+        "    Args:\n",
+        "      samples_path: path to csv file containing the training data\n",
+        "      labels_path: path to csv file containing the labels for the training data\n",
+        "      model_state_output_path: Path to store the trained model\n",
+        "  \"\"\"\n",
+        "  samples = pandas.read_csv(samples_path)\n",
+        "  labels = pandas.read_csv(labels_path)\n",
+        "  xgb = xgboost.XGBClassifier(max_depth=3)\n",
+        "  xgb.fit(samples, labels)\n",
+        "  xgb.save_model(model_state_output_path)\n",
+        "  return xgb"
+      ],
+      "metadata": {
+        "id": "MUUq_j6NXu41"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Preprocess the data and train the model\n",
+        "\n",
+        "We split the dataset in a training set, test set and labels for training set. We use the test set as input data for our test stream, you can use the test set to validate the trained model's performance. After preprocessing, we have 3 different files containing data. Once we have our training set, we can also train the XGBoost model and store it in a json file, such that it can be loaded by the ModelHandler."
+      ],
+      "metadata": {
+        "id": "h5NKrWvlaV4T"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "preprocess_data(\n",
+        "    dataset_path=DATASET,\n",
+        "    training_set_path=TRAINING_SET,\n",
+        "    labels_path=LABELS,\n",
+        "    test_set_path=TEST_SET)\n",
+        "\n",
+        "train_model(\n",
+        "    samples_path=TRAINING_SET,\n",
+        "    labels_path=LABELS,\n",
+        "    model_state_output_path=MODEL_STATE)"
+      ],
+      "metadata": {
+        "id": "QNLbrXfEYtZP"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Named tuple to store the number of good, bad and medium quality samples in a window\n",
+        "class MilkQualityAggregation(NamedTuple):\n",
+        "  bad_quality_measurements: int\n",
+        "  medium_quality_measurements: int\n",
+        "  high_quality_measurements: int"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/",
+          "height": 240
+        },
+        "id": "1RcM3F66aqSU",
+        "outputId": "84618293-01a1-473f-9cd0-c8b4e88eb3f3"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "error",
+          "ename": "NameError",
+          "evalue": "ignored",
+          "traceback": [
+            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+            "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
+            "\u001b[0;32m<ipython-input-4-3cabe047e753>\u001b[0m in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0;31m# Named tuple to store the number of good, bad and medium quality samples in a window\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mclass\u001b[0m \u001b[0mMilkQualityAggregation\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mNamedTuple\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      3\u001b[0m   \u001b[0mbad_quality_measurements\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mint\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      4\u001b[0m   \u001b[0mmedium_quality_measurements\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mint\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      5\u001b[0m   \u001b[0mhigh_quality_measurements\u001b[0m\u001b[0;34m:\u001
 b[0m \u001b[0mint\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+            "\u001b[0;31mNameError\u001b[0m: name 'NamedTuple' is not defined"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Helper CombineFn to aggregate the results of a window, the function keeps track of the number of good, bad and medium quality samples in the stream "
+      ],
+      "metadata": {
+        "id": "uIy51BJmjuJv"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class AggregateMilkQualityResults(beam.CombineFn):\n",
+        "  \"\"\"Simple aggregation to keep track of the number\n",
+        "   of samples with good, bad and medium quality milk.\"\"\"\n",
+        "  def create_accumulator(self):\n",
+        "    return MilkQualityAggregation(0, 0, 0)\n",
+        "\n",
+        "  def add_input(\n",
+        "      self, accumulator: MilkQualityAggregation, element: PredictionResult):\n",
+        "    quality = element.inference[0]\n",
+        "    if quality == 0:\n",
+        "      return MilkQualityAggregation(\n",
+        "          accumulator.bad_quality_measurements + 1,\n",
+        "          accumulator.medium_quality_measurements,\n",
+        "          accumulator.high_quality_measurements)\n",
+        "    elif quality == 1:\n",
+        "      return MilkQualityAggregation(\n",
+        "          accumulator.bad_quality_measurements,\n",
+        "          accumulator.medium_quality_measurements + 1,\n",
+        "          accumulator.high_quality_measurements)\n",
+        "    else:\n",
+        "      return MilkQualityAggregation(\n",
+        "          accumulator.bad_quality_measurements,\n",
+        "          accumulator.medium_quality_measurements,\n",
+        "          accumulator.high_quality_measurements + 1)\n",
+        "\n",
+        "  def merge_accumulators(self, accumulators: MilkQualityAggregation):\n",
+        "    return MilkQualityAggregation(\n",
+        "        sum(\n",
+        "            aggregation.bad_quality_measurements\n",
+        "            for aggregation in accumulators),\n",
+        "        sum(\n",
+        "            aggregation.medium_quality_measurements\n",
+        "            for aggregation in accumulators),\n",
+        "        sum(\n",
+        "            aggregation.high_quality_measurements\n",
+        "            for aggregation in accumulators),\n",
+        "    )\n",
+        "\n",
+        "  def extract_output(self, accumulator: MilkQualityAggregation):\n",
+        "    return accumulator"
+      ],
+      "metadata": {
+        "id": "Piezn0pfjs0E"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Create a streaming pipeline using the test data\n",
+        "\n",
+        "We construct a TestStream that contains all samples from the test set."

Review Comment:
   Could you add a brief explanation of what a TestStream is?
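
   For reference, a TestStream is Beam's testing source for simulating an unbounded/streaming input: you add elements with explicit event timestamps and advance the watermark (and optionally processing time) by hand, so windowing and triggering behave deterministically. The explanation could be paired with a minimal sketch along these lines (illustrative only, not the exact cell in this PR):

       import apache_beam as beam
       from apache_beam.options.pipeline_options import PipelineOptions
       from apache_beam.testing.test_stream import TestStream
       from apache_beam.transforms.window import TimestampedValue

       # Emit two elements at second 1, advance the watermark, emit one more, then end the stream.
       test_stream = (
           TestStream()
           .advance_watermark_to(0)
           .add_elements([TimestampedValue('sample-1', 1), TimestampedValue('sample-2', 1)])
           .advance_watermark_to(60)
           .add_elements([TimestampedValue('sample-3', 61)])
           .advance_watermark_to_infinity())

       with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
           _ = p | test_stream | beam.Map(print)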



##########
examples/notebooks/beam-ml/run_inference_windowing.ipynb:
##########
@@ -0,0 +1,477 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": []
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "K2MpsIa-ncMZ"
+      },
+      "outputs": [],
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Apache Beam RunInference Windowing Example\n",
+        "\n",
+        "<table align=\"left\">\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n",
+        "  </td>\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n",
+        "  </td>\n",
+        "</table>\n"
+      ],
+      "metadata": {
+        "id": "fKxfINuCPsh9"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "This notebook demonstrates the use of the RunInference transform together with windowing in a streaming pipeline. The pipeline predicts the quality of milk samples and classifies them as 'good', 'bad' or 'medium'. The predictions for each window are then aggregated. The pipeline makes use of the XGBoost model handler. For more information about the RunInference API, see the [Machine Learning section of the Apache Beam documentation](https://beam.apache.org/documentation/ml/overview/).\n",
+        "\n",
+        "With RunInference, a model handlers manages batching, vectorization, and prediction optimization for your XGBoost pipeline or model.\n",
+        "\n",
+        "This notebook demonstrates the following common RunInference patterns:\n",
+        "\n",
+        "- Generate predictions for all samples in a window\n",
+        "- Aggregate the results per window after RunInference\n",
+        "- Print the aggregations"
+      ],
+      "metadata": {
+        "id": "knGVsVR6P_nZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Complete the following setup steps:\n",
+        "- Install dependencies for Apache Beam.\n",
+        "- Install XGBoost.\n",
+        "- Download the [Milk Quality Dataset from Kaggle](https://www.kaggle.com/datasets/cpluzshrijayan/milkquality) and put in the current directory with the name `milk_quality.csv` (The dataset should be in csv format)"
+      ],
+      "metadata": {
+        "id": "s5PPNo9HRRe1"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "!pip install git+https://github.com/apache/beam.git\n",
+        "!pip install xgboost"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "YiPD9-j_RRNC",
+        "outputId": "757726e3-ab3a-4544-fd9d-625df4e526ec"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
+            "Collecting git+https://github.com/apache/beam.git\n",
+            "  Cloning https://github.com/apache/beam.git to /tmp/pip-req-build-4rrh40ti\n",
+            "  Running command git clone --filter=blob:none --quiet https://github.com/apache/beam.git /tmp/pip-req-build-4rrh40ti\n",
+            "  Resolved https://github.com/apache/beam.git to commit 47f1123f03f52744b951a8b6fa067b32e66112bf\n",
+            "  Running command git submodule update --init --recursive -q\n",
+            "\u001b[31mERROR: git+https://github.com/apache/beam.git does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.\u001b[0m\u001b[31m\n",
+            "\u001b[0m"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## About the dataset\n",
+        "\n",
+        "This dataset comes in the form of a csv file that consists of 7 columns: pH, temperature, taste, odor, fat, turbidity, and color. The dataset also contains a column that labels the quality of that sample as `good`, `bad` or `medium`."
+      ],
+      "metadata": {
+        "id": "Uz9BcQg_Qbva"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "import argparse\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import NamedTuple\n",
+        "\n",
+        "import pandas\n",
+        "from sklearn.model_selection import train_test_split\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "import xgboost\n",
+        "from apache_beam import window\n",
+        "from apache_beam.ml.inference import RunInference\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.xgboost_inference import XGBoostModelHandlerPandas\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.runners.runner import PipelineResult\n",
+        "from apache_beam.testing.test_stream import TestStream"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/",
+          "height": 383
+        },
+        "id": "sHDrJ1nTPqUv",
+        "outputId": "fe518b5f-bce6-4a9d-a8c8-43720ebc2638"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "error",
+          "ename": "ModuleNotFoundError",
+          "evalue": "ignored",
+          "traceback": [
+            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+            "\u001b[0;31mModuleNotFoundError\u001b[0m                       Traceback (most recent call last)",
+            "\u001b[0;32m<ipython-input-3-d037cd1bd088>\u001b[0m in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mxgboost\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mapache_beam\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mbeam\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mapache_beam\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mml\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minference\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mRunInference\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      5\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mapache_beam\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mml\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minference\u001b[0m\u001b[0;34m.\u001b[0m
 \u001b[0mxgboost_inference\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mXGBoostModelHandlerNumpy\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+            "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'apache_beam'",
+            "",
+            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0;32m\nNOTE: If your import is failing due to a missing package, you can\nmanually install dependencies using either !pip or !apt.\n\nTo view examples of installing some common dependencies, click the\n\"Open Examples\" button below.\n\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n"
+          ],
+          "errorDetails": {
+            "actions": [
+              {
+                "action": "open_url",
+                "actionText": "Open Examples",
+                "url": "/notebooks/snippets/importing_libraries.ipynb"
+              }
+            ]
+          }
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Load the Kaggle dataset and train XGBoost model\n",
+        "This section demonstrates the following steps:\n",
+        "1. Load the Milk Quality dataset from Kaggle.\n",
+        "2. Split the data in a training and test set.\n",
+        "2. Train the XGBoost classifier to predict the quality of milk.\n",
+        "3. Save the model in a JSON file using `mode.save_model`. (https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html)\n"
+      ],
+      "metadata": {
+        "id": "kpXjNoVgRpOb"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "DATASET = \"dataset.csv\"\n",
+        "TRAINING_SET = \"training_set.csv\"\n",
+        "TEST_SET = \"test_set.csv\"\n",
+        "LABELS = \"labels.csv\"\n",
+        "MODEL_STATE = \"model.json\""
+      ],
+      "metadata": {
+        "id": "cnH5lTahY6Ty"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Preprocessing helper functions"
+      ],
+      "metadata": {
+        "id": "KNRkuUQ9aA62"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_data(\n",
+        "    dataset_path: str,\n",
+        "    training_set_path: str,\n",
+        "    labels_path: str,\n",
+        "    test_set_path: str):\n",
+        "  \"\"\"\n",
+        "    Helper function to split the dataset into a training set\n",
+        "    and its labels and a test set. The training set and\n",
+        "    its labels are used to train a lightweight model.\n",
+        "    The test set is used to create a test streaming pipeline.\n",
+        "    Args:\n",
+        "        dataset_path: path to csv file containing the Kaggle\n",
+        "         milk quality dataset\n",
+        "        training_set_path: path to output the training samples\n",
+        "        labels_path:  path to output the labels for the training set\n",
+        "        test_set_path: path to output the test samples\n",
+        "    \"\"\"\n",
+        "  df = pandas.read_csv(dataset_path)\n",
+        "  df['Grade'].replace(['low', 'medium', 'high'], [0, 1, 2], inplace=True)\n",
+        "  x = df.drop(columns=['Grade'])\n",
+        "  y = df['Grade']\n",
+        "  x_train, x_test, y_train, _ = \\\n",
+        "      train_test_split(x, y, test_size=0.60, random_state=99)\n",
+        "  x_train.to_csv(training_set_path, index=False)\n",
+        "  y_train.to_csv(labels_path, index=False)\n",
+        "  x_test.to_csv(test_set_path, index=False)\n",
+        "\n",
+        "\n",
+        "def train_model(\n",
+        "    samples_path: str, labels_path: str, model_state_output_path: str):\n",
+        "  \"\"\"Function to train the XGBoost model.\n",
+        "    Args:\n",
+        "      samples_path: path to csv file containing the training data\n",
+        "      labels_path: path to csv file containing the labels for the training data\n",
+        "      model_state_output_path: Path to store the trained model\n",
+        "  \"\"\"\n",
+        "  samples = pandas.read_csv(samples_path)\n",
+        "  labels = pandas.read_csv(labels_path)\n",
+        "  xgb = xgboost.XGBClassifier(max_depth=3)\n",
+        "  xgb.fit(samples, labels)\n",
+        "  xgb.save_model(model_state_output_path)\n",
+        "  return xgb"
+      ],
+      "metadata": {
+        "id": "MUUq_j6NXu41"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Preprocess the data and train the model\n",
+        "\n",
+        "We split the dataset in a training set, test set and labels for training set. We use the test set as input data for our test stream, you can use the test set to validate the trained model's performance. After preprocessing, we have 3 different files containing data. Once we have our training set, we can also train the XGBoost model and store it in a json file, such that it can be loaded by the ModelHandler."
+      ],
+      "metadata": {
+        "id": "h5NKrWvlaV4T"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "preprocess_data(\n",
+        "    dataset_path=DATASET,\n",
+        "    training_set_path=TRAINING_SET,\n",
+        "    labels_path=LABELS,\n",
+        "    test_set_path=TEST_SET)\n",
+        "\n",
+        "train_model(\n",
+        "    samples_path=TRAINING_SET,\n",
+        "    labels_path=LABELS,\n",
+        "    model_state_output_path=MODEL_STATE)"
+      ],
+      "metadata": {
+        "id": "QNLbrXfEYtZP"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Named tuple to store the number of good, bad and medium quality samples in a window\n",
+        "class MilkQualityAggregation(NamedTuple):\n",
+        "  bad_quality_measurements: int\n",
+        "  medium_quality_measurements: int\n",
+        "  high_quality_measurements: int"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/",
+          "height": 240
+        },
+        "id": "1RcM3F66aqSU",
+        "outputId": "84618293-01a1-473f-9cd0-c8b4e88eb3f3"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "error",
+          "ename": "NameError",
+          "evalue": "ignored",
+          "traceback": [
+            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+            "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
+            "\u001b[0;32m<ipython-input-4-3cabe047e753>\u001b[0m in \u001b[0;36m<cell line: 2>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0;31m# Named tuple to store the number of good, bad and medium quality samples in a window\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mclass\u001b[0m \u001b[0mMilkQualityAggregation\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mNamedTuple\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      3\u001b[0m   \u001b[0mbad_quality_measurements\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mint\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      4\u001b[0m   \u001b[0mmedium_quality_measurements\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mint\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      5\u001b[0m   \u001b[0mhigh_quality_measurements\u001b[0m\u001b[0;34m:\u001
 b[0m \u001b[0mint\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+            "\u001b[0;31mNameError\u001b[0m: name 'NamedTuple' is not defined"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Helper CombineFn to aggregate the results of a window, the function keeps track of the number of good, bad and medium quality samples in the stream "

Review Comment:
   Could you split this into a title and brief descriptive text? (The title could be something like `Count the samples by quality for each window`.)
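
   The descriptive text could also briefly note how the CombineFn is meant to be applied per window, e.g. with something roughly like this (hypothetical window size; not necessarily the exact pipeline code in this PR):

       # `predictions` is assumed to be the PCollection produced by RunInference.
       quality_counts_per_window = (
           predictions
           | 'Window' >> beam.WindowInto(window.FixedWindows(60))
           | 'Count per window' >> beam.CombineGlobally(
               AggregateMilkQualityResults()).without_defaults())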



##########
examples/notebooks/beam-ml/run_inference_windowing.ipynb:
##########
@@ -0,0 +1,477 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": []
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "K2MpsIa-ncMZ"
+      },
+      "outputs": [],
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Apache Beam RunInference Windowing Example\n",
+        "\n",
+        "<table align=\"left\">\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n",
+        "  </td>\n",
+        "  <td>\n",
+        "    <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n",
+        "  </td>\n",
+        "</table>\n"
+      ],
+      "metadata": {
+        "id": "fKxfINuCPsh9"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "This notebook demonstrates the use of the RunInference transform together with windowing in a streaming pipeline. The pipeline predicts the quality of milk samples and classifies them as 'good', 'bad' or 'medium'. The predictions for each window are then aggregated. The pipeline makes use of the XGBoost model handler. For more information about the RunInference API, see the [Machine Learning section of the Apache Beam documentation](https://beam.apache.org/documentation/ml/overview/).\n",

Review Comment:
   Could you add a note about why windowing is useful (e.g., for getting results within a particular time window to see trends, and for getting intermediate results without needing to wait for all the data to process)? Also, maybe link `windowing` to https://beam.apache.org/documentation/programming-guide/#windowing
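
   Something like: windowing lets the pipeline emit an aggregate for each slice of time (e.g. milk quality counts per minute), so you can spot trends and get intermediate results as the stream runs instead of waiting for all the data to arrive. It might also help to show that the window choice is a one-line transform, e.g. (illustrative only, using the notebook's `from apache_beam import window` import):

       # Non-overlapping 60-second event-time windows: one aggregate per minute of samples.
       fixed = beam.WindowInto(window.FixedWindows(60))

       # Overlapping 5-minute windows starting every minute: a smoother view of trends.
       sliding = beam.WindowInto(window.SlidingWindows(size=300, period=60))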



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org