You are viewing a plain text version of this content. The canonical link for it is here.
Posted to github@beam.apache.org by "AnandInguva (via GitHub)" <gi...@apache.org> on 2023/03/30 18:50:49 UTC

[GitHub] [beam] AnandInguva opened a new pull request, #26048: Auto model updates notebook

AnandInguva opened a new pull request, #26048:
URL: https://github.com/apache/beam/pull/26048

   **Please** add a meaningful description for your change here
   
   ------------------------
   
   Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
   
    - [ ] Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
    - [ ] Update `CHANGES.md` with noteworthy changes.
    - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   See the [Contributor Guide](https://beam.apache.org/contribute) for more tips on [how to make review process smoother](https://beam.apache.org/contribute/get-started-contributing/#make-the-reviewers-job-easier).
   
   To check the build health, please visit [https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md](https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md)
   
   GitHub Actions Tests Status (on master branch)
   ------------------------------------------------------------------------------------------------
   [![Build python source distribution and wheels](https://github.com/apache/beam/workflows/Build%20python%20source%20distribution%20and%20wheels/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Build+python+source+distribution+and+wheels%22+branch%3Amaster+event%3Aschedule)
   [![Python tests](https://github.com/apache/beam/workflows/Python%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Python+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Java tests](https://github.com/apache/beam/workflows/Java%20Tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Java+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Go tests](https://github.com/apache/beam/workflows/Go%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Go+tests%22+branch%3Amaster+event%3Aschedule)
   
   See [CI.md](https://github.com/apache/beam/blob/master/CI.md) for more information about GitHub Actions CI.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] AnandInguva commented on pull request #26048: Auto model updates notebook

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #26048:
URL: https://github.com/apache/beam/pull/26048#issuecomment-1492430489

   cc: @rezarokni 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] AnandInguva commented on pull request #26048: Auto model updates notebook

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #26048:
URL: https://github.com/apache/beam/pull/26048#issuecomment-1495848090

   R: @damccorm 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] github-actions[bot] commented on pull request #26048: Auto model updates notebook

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #26048:
URL: https://github.com/apache/beam/pull/26048#issuecomment-1492077789

   Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] rszper commented on a diff in pull request #26048: Auto model updates notebook

Posted by "rszper (via GitHub)" <gi...@apache.org>.
rszper commented on code in PR #26048:
URL: https://github.com/apache/beam/pull/26048#discussion_r1154695177


##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"

Review Comment:
   ```suggestion
           "# Update ML models in running pipelines"
   ```
   For the notebook, I think we should make the title a bit broader.



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"

Review Comment:
   ```suggestion
           "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."

Review Comment:
   ```suggestion
           "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",

Review Comment:
   ```suggestion
           "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",

Review Comment:
   ```suggestion
           "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",

Review Comment:
   ```suggestion
           "# Install the pipeline dependencies on Dataflow.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",

Review Comment:
   ```suggestion
           "## Before you begin\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",

Review Comment:
   ```suggestion
           "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",

Review Comment:
   ```suggestion
           "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",

Review Comment:
   ```suggestion
           "Install the dependencies required to run this notebook.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",

Review Comment:
   ```suggestion
           "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",

Review Comment:
   ```suggestion
           "# Provide required pipeline options for the Dataflow Runner.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",

Review Comment:
   ```suggestion
           "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",

Review Comment:
   ```suggestion
           "# Authenticate to your Google Cloud account.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"

Review Comment:
   ```suggestion
           "### Pipeline steps\n"
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",

Review Comment:
   ```suggestion
           "## Runner\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"

Review Comment:
   ```suggestion
           "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."

Review Comment:
   ```suggestion
           "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",

Review Comment:
   ```suggestion
           "# Set the project to the default project in your current Google Cloud environment.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for auto model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see some outputs emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model(for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference will update the `model_uri` of `TFModelHandlerTensor` using `WatchFilePattern` as a side input.\n"

Review Comment:
   ```suggestion
           "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model (for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`.\n"
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",

Review Comment:
   ```suggestion
           "# Set the Google Cloud region that you want to run Dataflow in.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for auto model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see some outputs emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model(for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference will update the `model_uri` of `TFModelHandlerTensor` using `WatchFilePattern` as a side input.\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # side input used to watch for .h5 file and auto update the model_uri of the TFModelHandlerTensor.\n",
+        " file_pattern = 'gs://your-bucket/*.h5'\n",
+        "  side_input_pcoll = (\n",
+        "      pipeline\n",
+        "      | \"WatchFilePattern\" >> WatchFilePattern(file_pattern=file_pattern,\n",
+        "                                                interval=side_input_fire_interval,\n",
+        "                                                stop_timestamp=end_timestamp))\n",
+        " inferences = (\n",
+        "     image_data\n",
+        "     | \"ApplyWindowing\" >> beam.WindowInto(beam.window.FixedWindows(10))\n",
+        "     | \"RunInference\" >> RunInference(model_handler=model_handler,\n",
+        "                                      model_metadata_pcoll=side_input_pcoll))"
+      ],
+      "metadata": {
+        "id": "_AjvvexJ_hUq"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "4. Post-process `PredictionResult` object.\n",
+        "\n",
+        "  When the inference is complete, RunInference outputs a `PredictionResult` object that contains `example`, `inference`, and `model_id` fields. The `model_id` is used to identify which model is used for running the inference. The `PostProcessor` returns the predicted label and the model_id used to run the inference on the predicted label."

Review Comment:
   ```suggestion
           "  When the inference is complete, RunInference outputs a `PredictionResult` object that contains the fields `example`, `inference`, and `model_id`. The `model_id` field is used to identify which model is used for running the inference. The `PostProcessor` returns the predicted label and the model ID used to run the inference on the predicted label."
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for auto model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see some outputs emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model(for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference will update the `model_uri` of `TFModelHandlerTensor` using `WatchFilePattern` as a side input.\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # side input used to watch for .h5 file and auto update the model_uri of the TFModelHandlerTensor.\n",

Review Comment:
   ```suggestion
           " # The side input used to watch for the .h5 file and update the model_uri of the TFModelHandlerTensor.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for auto model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see some outputs emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model(for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference will update the `model_uri` of `TFModelHandlerTensor` using `WatchFilePattern` as a side input.\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # side input used to watch for .h5 file and auto update the model_uri of the TFModelHandlerTensor.\n",
+        " file_pattern = 'gs://your-bucket/*.h5'\n",
+        "  side_input_pcoll = (\n",
+        "      pipeline\n",
+        "      | \"WatchFilePattern\" >> WatchFilePattern(file_pattern=file_pattern,\n",
+        "                                                interval=side_input_fire_interval,\n",
+        "                                                stop_timestamp=end_timestamp))\n",
+        " inferences = (\n",
+        "     image_data\n",
+        "     | \"ApplyWindowing\" >> beam.WindowInto(beam.window.FixedWindows(10))\n",
+        "     | \"RunInference\" >> RunInference(model_handler=model_handler,\n",
+        "                                      model_metadata_pcoll=side_input_pcoll))"
+      ],
+      "metadata": {
+        "id": "_AjvvexJ_hUq"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "4. Post-process `PredictionResult` object.\n",

Review Comment:
   ```suggestion
           "4. Post-process the `PredictionResult` object.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for auto model update**\n",

Review Comment:
   ```suggestion
           "  **How to watch for the automatic model update**\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for auto model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see some outputs emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model(for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference will update the `model_uri` of `TFModelHandlerTensor` using `WatchFilePattern` as a side input.\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # side input used to watch for .h5 file and auto update the model_uri of the TFModelHandlerTensor.\n",
+        " file_pattern = 'gs://your-bucket/*.h5'\n",
+        "  side_input_pcoll = (\n",
+        "      pipeline\n",
+        "      | \"WatchFilePattern\" >> WatchFilePattern(file_pattern=file_pattern,\n",
+        "                                                interval=side_input_fire_interval,\n",
+        "                                                stop_timestamp=end_timestamp))\n",
+        " inferences = (\n",
+        "     image_data\n",
+        "     | \"ApplyWindowing\" >> beam.WindowInto(beam.window.FixedWindows(10))\n",
+        "     | \"RunInference\" >> RunInference(model_handler=model_handler,\n",
+        "                                      model_metadata_pcoll=side_input_pcoll))"
+      ],
+      "metadata": {
+        "id": "_AjvvexJ_hUq"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "4. Post-process `PredictionResult` object.\n",
+        "\n",
+        "  When the inference is complete, RunInference outputs a `PredictionResult` object that contains `example`, `inference`, and `model_id` fields. The `model_id` is used to identify which model is used for running the inference. The `PostProcessor` returns the predicted label and the model_id used to run the inference on the predicted label."
+      ],
+      "metadata": {
+        "id": "lTA4wRWNDVis"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "post_processor = (\n",
+        "    inferences\n",
+        "    | \"PostProcessResults\" >> beam.ParDo(PostProcessor())\n",
+        "    | \"LogResults\" >> beam.Map(logging.info))"
+      ],
+      "metadata": {
+        "id": "9TB76fo-_vZJ"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Run the pipeline"
+      ],
+      "metadata": {
+        "id": "_ty03jDnKdKR"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# run the pipeline\n",

Review Comment:
   ```suggestion
           "# Run the pipeline.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",

Review Comment:
   ```suggestion
           "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",

Review Comment:
   ```suggestion
           " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",

Review Comment:
   ```suggestion
           "# Write the depencies to the requirements file.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",

Review Comment:
   ```suggestion
           "# In a requirements file, define the dependencies required for the pipeline.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."

Review Comment:
   ```suggestion
           " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",

Review Comment:
   ```suggestion
           "Next, review the pipeline steps and examine the code.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."

Review Comment:
   ```suggestion
           "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",

Review Comment:
   ```suggestion
           "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrives in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",

Review Comment:
   ```suggestion
           "1. Create a `PeriodicImpulse`, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",

Review Comment:
   ```suggestion
           "# Define the pipeline object.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."

Review Comment:
   ```suggestion
           "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."
   ```
   
   Do we have the license to use that image on DevSite? For the other notebooks, we had to swap out the images.



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Use WatchFilePattern to auto-update ML models in RunInference"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a `RunInference` PTransform to run inference on images using TensorFlow models. It uses a side input PCollection that emits `ModelMetadata` to update the model.\n",
+        "\n",
+        "Using side inputs, you can update your model (which is passed in a ModelHandler configuration object) in real-time, even while the Beam pipeline is still running. This can be done either by leveraging one of Beam's provided patterns, such as the WatchFilePattern, or by configuring a custom side input PCollection that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the Side inputs section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This notebook uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the `RunInference` PTransform to automatically update the ML model without stopping the Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### Before you begin\n",
+        "Install the necessary dependencies that are used to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# authenticate to your gcp account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Runner\n",
+        "\n",
+        "We will run this pipeline on `DataflowRunner`. Please make sure you have all the required permissions to run the pipeline on `Dataflow`.\n",
+        "\n",
+        "Now, we will onfigure the pipeline options for the pipeline to run on Dataflow. Make sure the streaming mode is on for this pipeline."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# provide required pipeline options for DataflowRunner\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Sets the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Sets the Google Cloud Region in which Cloud Dataflow runs.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT! Adjust the following to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We need to install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. We can pass them via `requirements_file` pipeline option."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define dependencies in a requirements file required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# write the depencies to a requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# the pipeline needs dependencies needed to be installed on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " In this notebook, we will use `TFModelHandlerTensor` as the ModelHandler. We will use `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5 and place it in a directory that you would use to auto model updates."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "To run the inference, we need to read the image and convert it into Tensorflow Tensor. We can do this using `preprocess_image` below."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# define pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Now, let's jump into the pipeline code.\n",
+        "\n",
+        "**Pipeline steps**:\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodImpulse`, which emits output every `n` seconds `PeriodicImpulse` transform generates an infinite sequence of elements with given runtime interval.\n",
+        "\n",
+        "  We use `PeriodicImpulse` in this notebook to mimic the `Pub/Sub` source. Since the inputs in a streaming pipleine arrives in intervals, we use `PeriodicImpulse` to output element at `m` intervals.\n",
+        "\n",
+        "To learn more about PeriodicImpulse, please take a look at the [code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)"
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. Read and pre-process the images using the `read_image` function. For this notebook, we will be using `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the beam pipeline. We will use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",

Review Comment:
   ```suggestion
           "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] AnandInguva commented on a diff in pull request #26048: Auto model updates notebook

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #26048:
URL: https://github.com/apache/beam/pull/26048#discussion_r1160057830


##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."

Review Comment:
   Added.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] damccorm commented on a diff in pull request #26048: Auto model updates notebook

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #26048:
URL: https://github.com/apache/beam/pull/26048#discussion_r1160236029


##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,475 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet as our initial model used for inference.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse (will run for 20 minutes).\n",
+        "main_input_fire_interval = 60 # interval in seconds at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval in seconds at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences.\n",
+        "\n",
+        "  **Note**: Image used for prediction is licensed in CC-BY, creator in listed in the [LICENSE.txt](https://storage.googleapis.com/apache-beam-samples/image_captioning/LICENSE.txt) file."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "![download.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOAAAADgCAIAAACVT/22AAAKMWlDQ1BJQ0MgUHJvZmlsZQAAeJydlndUU9kWh8+9N71QkhCKlNBraFICSA29SJEuKjEJEErAkAAiNkRUcERRkaYIMijggKNDkbEiioUBUbHrBBlE1HFwFBuWSWStGd+8ee/Nm98f935rn73P3Wfvfda6AJD8gwXCTFgJgAyhWBTh58WIjYtnYAcBDPAAA2wA4HCzs0IW+EYCmQJ82IxsmRP4F726DiD5+yrTP4zBAP+flLlZIjEAUJiM5/L42VwZF8k4PVecJbdPyZi2NE3OMErOIlmCMlaTc/IsW3z2mWUPOfMyhDwZy3PO4mXw5Nwn4405Er6MkWAZF+cI+LkyviZjg3RJhkDGb+SxGXxONgAoktwu5nNTZGwtY5IoMoIt43kA4EjJX/DSL1jMzxPLD8XOzFouEiSniBkmXFOGjZMTi+HPz03ni8XMMA43jSPiMdiZGVkc4XIAZs/8WRR5bRmyIjvYODk4MG0tbb4o1H9d/JuS93aWXoR/7hlEH/jD9ld+mQ0AsKZltdn6h21pFQBd6wFQu/2HzWAvAIqyvnUOfXEeunxeUsTiLGcrq9zcXEsBn2spL+jv+p8Of0NffM9Svt3v5WF485M4knQxQ143bmZ6pkTEyM7icPkM5p+H+B8H/nUeFhH8JL6IL5RFRMumTCBMlrVbyBOIBZlChkD4n5r4D8P+pNm5lona+BHQllgCpSEaQH4eACgqESAJe2Qr0O99C8ZHA/nNi9GZmJ37z4L+fVe4TP7IFiR/jmNHRDK4ElHO7Jr8WgI0IABFQAPqQBvoAxPABLbAEbgAD+ADAkEoiARxYDHgghSQAUQgFxSAtaAYlIKtYCeoBnWgETSDNnAYdIFj4DQ4By6By2AE3AFSMA6egCnwCsxAEISFyBAVUo
 d0IEPIHLKFWJAb5AMFQxFQHJQIJUNCSAIVQOugUqgcqobqoWboW+godBq6AA1Dt6BRaBL6FXoHIzAJpsFasBFsBbNgTzgIjoQXwcnwMjgfLoK3wJVwA3wQ7oRPw5fgEVgKP4GnEYAQETqiizARFsJGQpF4JAkRIauQEqQCaUDakB6kH7mKSJGnyFsUBkVFMVBMlAvKHxWF4qKWoVahNqOqUQdQnag+1FXUKGoK9RFNRmuizdHO6AB0LDoZnYsuRlegm9Ad6LPoEfQ4+hUGg6FjjDGOGH9MHCYVswKzGbMb0445hRnGjGGmsVisOtYc64oNxXKwYmwxtgp7EHsSewU7jn2DI+J0cLY4X1w8TogrxFXgWnAncFdwE7gZvBLeEO+MD8Xz8MvxZfhGfA9+CD+OnyEoE4wJroRIQiphLaGS0EY4S7hLeEEkEvWITsRwooC4hlhJPEQ8TxwlviVRSGYkNimBJCFtIe0nnSLdIr0gk8lGZA9yPFlM3kJuJp8h3ye/UaAqWCoEKPAUVivUKHQqXFF4pohXNFT0VFysmK9YoXhEcUjxqRJeyUiJrcRRWqVUo3RU6YbStDJV2UY5VDlDebNyi/IF5UcULMWI4kPhUYoo+yhnKGNUhKpPZVO51HXURupZ6jgNQzOmBdBSaaW0b2iDtCkVioqdSrRKnkqNynEVKR2hG9ED6On0Mvph+nX6O1UtVU9Vvuom1TbVK6qv1eaoeajx1UrU2tVG1N6pM9R91NPUt6l3qd/TQGmYaYRr5Grs0Tir8XQObY7LHO6ckjmH59zWhDXNNCM0V2ju0xzQnNbS1vLTytKq0jqj9VSbru2hnaq9Q/uE9qQOVcdNR6CzQ+ekzmOGCsOTkc6oZPQxpnQ1df11Jbr1uoO6M3rGelF6hXrtevf0Cfos/ST9Hfq9+lMGOgYhBgUGrQa3DfGGLMMUw12G/YavjYyNYow2GHUZPTJWMw4wzjduNb5rQjZxN1lm0mByzRR
 jyjJNM91tetkMNrM3SzGrMRsyh80dzAXmu82HLdAWThZCiwaLG0wS05OZw2xljlrSLYMtCy27LJ9ZGVjFW22z6rf6aG1vnW7daH3HhmITaFNo02Pzq62ZLde2xvbaXPJc37mr53bPfW5nbse322N3055qH2K/wb7X/oODo4PIoc1h0tHAMdGx1vEGi8YKY21mnXdCO3k5rXY65vTW2cFZ7HzY+RcXpkuaS4vLo3nG8/jzGueNueq5clzrXaVuDLdEt71uUnddd457g/sDD30PnkeTx4SnqWeq50HPZ17WXiKvDq/XbGf2SvYpb8Tbz7vEe9CH4hPlU+1z31fPN9m31XfKz95vhd8pf7R/kP82/xsBWgHcgOaAqUDHwJWBfUGkoAVB1UEPgs2CRcE9IXBIYMj2kLvzDecL53eFgtCA0O2h98KMw5aFfR+OCQ8Lrwl/GGETURDRv4C6YMmClgWvIr0iyyLvRJlESaJ6oxWjE6Kbo1/HeMeUx0hjrWJXxl6K04gTxHXHY+Oj45vipxf6LNy5cDzBPqE44foi40V5iy4s1licvvj4EsUlnCVHEtGJMYktie85oZwGzvTSgKW1S6e4bO4u7hOeB28Hb5Lvyi/nTyS5JpUnPUp2Td6ePJninlKR8lTAFlQLnqf6p9alvk4LTduf9ik9Jr09A5eRmHFUSBGmCfsytTPzMoezzLOKs6TLnJftXDYlChI1ZUPZi7K7xTTZz9SAxESyXjKa45ZTk/MmNzr3SJ5ynjBvYLnZ8k3LJ/J9879egVrBXdFboFuwtmB0pefK+lXQqqWrelfrry5aPb7Gb82BtYS1aWt/KLQuLC98uS5mXU+RVtGaorH1futbixWKRcU3NrhsqNuI2ijYOLhp7qaqTR9LeCUXS61LK0rfb+ZuvviVzVeVX33akrRlsMyhbM9WzFbh1uvb3LcdKFcuzy8f2x6yvXMHY0fJjpc7l+y8UGFXUbeLsEuyS1oZXNldZVC1
 tep9dUr1SI1XTXutZu2m2te7ebuv7PHY01anVVda926vYO/Ner/6zgajhop9mH05+x42Rjf2f836urlJo6m06cN+4X7pgYgDfc2Ozc0tmi1lrXCrpHXyYMLBy994f9Pdxmyrb6e3lx4ChySHHn+b+O31w0GHe4+wjrR9Z/hdbQe1o6QT6lzeOdWV0iXtjusePhp4tLfHpafje8vv9x/TPVZzXOV42QnCiaITn07mn5w+lXXq6enk02O9S3rvnIk9c60vvG/wbNDZ8+d8z53p9+w/ed71/LELzheOXmRd7LrkcKlzwH6g4wf7HzoGHQY7hxyHui87Xe4Znjd84or7ldNXva+euxZw7dLI/JHh61HXb95IuCG9ybv56Fb6ree3c27P3FlzF3235J7SvYr7mvcbfjT9sV3qID0+6j068GDBgztj3LEnP2X/9H686CH5YcWEzkTzI9tHxyZ9Jy8/Xvh4/EnWk5mnxT8r/1z7zOTZd794/DIwFTs1/lz0/NOvm1+ov9j/0u5l73TY9P1XGa9mXpe8UX9z4C3rbf+7mHcTM7nvse8rP5h+6PkY9PHup4xPn34D94Tz+6TMXDkAAQAASURBVHichP3Xt2xJeh+IfV9EbJs+8/hzvalbt3x1tQe70WgSAAnQDClySGo4miXxbV70oLX0b0hr6UEPkkZrRmuRHJrhiAAJEoYwDaIb7dDl63pzzr3Hn/SZ20TEp4cwe+e5RSq7+p4020R88YvfZyM2/rN/9zsMERAAgAgAABEBAADcH/t39Ut/MCABIBIRgL0OIhIRIgIRIgAgubPMO3MUoX1P5kJE1V0QQWtg6C9KRBcaYP+PSEQEhHChnQSABAQEgIgIQIT2eEB3O+1awMwJ/nxC00CC//8vc1bVPgJ/Ipo2ABARs6Kyn22/AAgIEY1k3O3Inou+D/Y3K9gvawTCRVEQuS4wrLXKtsi33bZKmxPJDgfU5I5V27B2Cd8OTVZkHhdE2ouyd
 iBBbSjt+1Upk/vB/BVGMmgHnGzLDODMsPqPtfFAspgjcPK4iGvzHpFIuzv4MXCDY2/nYe0HAAEI0c4D/6sTKVENjgTMSXlFHnYc0UCVABmiF3c1Ydz40MooVB+qMas65GaWux4gkiZEtOCwLQRw6ERE8211LS8tRNDmW6yBD8HInqpbmIbY8UIrPCsef6wfXyAgI3QAIIsgK2QkM9DaSd53zQqPwLcEEQE0EFY0U8nCDiAB1qQO1a0MTJlFnAcUMjKD4MZ/RS5oIWxELTwy0FEa1DBOfgo5jJKdI4YgbfcJ7YXRCsl1zw6UQ4oXMZHrC/l54W9UcWrVNqBao9HPInBU5KYu1eRMrrNYURzWoWbpybUKyE2SCql2kLz47Im0Ipn6qNUmT4XBSuLVqDg025G9OPqORGsaDP0x5KBG6IjTH2PbSVBrl1Vz4JkT3fRZuZvneys4cNIwI7cqHKxRVG1WVw1ZYU2wDcKKIiwFoMEnICPSgCunIyKrtbDSUQjAABkBA2CrOp4BMsd/tdlee2deBEjEEMlCzoPeAdad4KYa2RNrbbBvCMxVGCIAMEMClhgJALVlZXSzYHWeIVYD7fSv/89+rBG5GQD0TbCnIFZIc+A17SVws6zqnZWQ66vnJY8Nx4U1jWwJr5oDQNWguFPs0NQEVaETERkgq3Wypp+djqxdzmCsrhsBfAvAmW019UzGgvAz58I5Ncl7QToGcx3XRhV4kwYAPMUiYk3otoNsFT1VOwmtfMndiqG/rMcSrp7lB8sKjFzH6iDwl6gI6UuuUGu/w0DtG4sL5i5DBscAjKzSR6ia6AYVEIChRTw4c8LJDo2OxNW2WCPJcXBl7dSGhyGaMWJ2grguERhTyCLP/0Z+9Kz0EYDVeL4aXrTWQdVpOzxeSismqYGPvVmNxqoJWZuZUB+Xumirc+qnrwiGKmvCWlDVvfwkdV/WXBwPGnO+QZmG+ssSn0Ukq+upisIqAgGOyFYnWgUXrJ
 1iIeJAzBz+aq32H/3twLLIxQasYhrN6FUidqNnNaZrngZS3iCqAYgANGntNQTWpofnYSBk9tJ1TNSuY8zIWjdq4+UsGqiEazkU681GNzLOdKumnZkJiMzCnGEFBqjo0A+Bx5GXniNjWuFdcy5WcPFS8K/qSwcuzwf1Qb9gNKxcyp5BNYVPABqcMVqHZiXBmuKtdcga60YoosYOq0aVtfB8c1bHxBmK5sW8MYe2gXXLFZmf+mS42YKPVgyl+ssZtbY1K9Kxk9fbwn4QrMeqrZvlrSkvddMCc5gb1pp2JyIEBpW5y6xBaZFMTpOvkqzzCqp56uRLTom4A9C2k/nW1k6z56wwaG0M7W211hZUXvwV8/pjV9w4g1qnaMFRwoXxRCIC7ZtcXRj8vZz/QLXWVoEC8AyP9U9o5zwQMYQqMOImVTVBViaDa5aAyouvjCBy6s9JcGWWgcOhJ27vXWG9LW7swPlADJCM2UtQ98l8c6GmdiuIr0qp1iEfgbH2KzmWZYCOL11YgBzHOF+ypqexGjE3sM6tQCsEz9++PWYK4IVGec+ZauC2SHFNNW8RVnmoenl311JZxY6VqnGMwLzbsaKjKvb1DSATs7pgV6LDiGs2eLD6sSB/oO945dza+9QvB9oaOwBE2uk7BjbmgH5GW22+QtXoohOWHpnviTV2KvMIkIDXBsTadVS11MCXMXZBzI7SsS4JRLQxCERC0CuTpzY2F6/jfqkrW/uNu4kdQj+RUJtfGTorwnXLdhGsCrCGkJ916K4LjsTRqW/ndtSaWtOkNaPMDicyrNxLE2IxHffidFJG31Nng3ioOWPSXdtLYGUaW0l/Cda9MWrOXiHZVWiRNtY2klUg1S2qLoDzkwDAxZjQAcO1zzRF+wFBZIjcdquunA3EaLWNKz8iAoiqM/a+tckH4AXqNYUNctcutjKDa7EJ86WdIXZeQCVuhAtT35zAvNCdiC2B+fiEH+GaO+mJ34LVaQFnTlrNhZV
 ycghw4q8UGkPSNrLt4vxewyAjy40rto8lW9MjYoZcrUK0POx64Qfcwh2p0qoWqz7wSXRhUNz9nD1qcWdMCh+/gy9LW1zsdKUhLZPVLuvGzjWz+saLxKgCN6CWC8kHRKsGG8TrSuujuxR5htJfpkdsX4RVjlXTyPN+TQtaIjF6AgCZ0eyvqoNX57E7wrET1rBQYyQHYAepSjDoEj+V2nOdrIte12TvXWcrympkVsOQWL+5HWhy0RwN2t3Qm6FARMZjW417VPepUbQzvmoWoe2XYStHMdWEdKoP/HxzXfSZOh/5r+59AbXGeKpM/BWXyd6uzvUrP7tb1O0qrExFGzdwTcVqNvmsIfPsVHXFslLdEvDviFZasNJUcEThCLHS7g4iNcohx4jM36PGczUX114daxqhjkj06SqyQR8A67QyP0iGh4x0GVKlwoyRt8oPNU0F4IOFYJzy2pFkjRQ3jk43oTPprMZyagGt+nQhN9+FmhTdl4DMRWqrcUArTJ9W8m11t8XVy6ATpj/fDS0iAOgVR57cXK6NN9p0gxuzGgM64gRwwS+y2qyKpNpZ6uaSmYy1/tbEYBR0TSPZ3lZQrOYB1c6n6gh8BZyI9T8AAKKmD9AzXO0M52QAAoB28LHIsdqkAqBpen2ae3qw/OS8GXsNp0BruRDbM9TE0Sejq9/QBSMd0Xg95Nvm3WWnRV2a2ArJd5+qP479UFfBkUoEULu0U/KV6oNVcw1rJwEAECGgrvl8tr8rg+0ZF6imZ8gRmLt3pcfrkwOq36nGpeD6i0RU5THtMNiuX3AELjLlBS1fM4tesQqs0q4JrlID5BSBgzV4lWHvYnNGK3RGRKI+XLUqA0AiYnaO+ytWt6GaCqvBxxmgNdQ7M7QSi0sokzsbYWXW2ps6m94lVw1xIlgHq67vap1y/1pjzBCzs+XAWn3kZvuqiLyh4gRIVir22uTi+37G2hdDIPIFE2TJsqYM0TceEcEk7ldVTjWTPStW
 VloNgx6vaO+8gpJVX95fuvrtohPv54YJGFXM61T4xUvR6qe6mbBid5jGEhEgA9KeJmoNsoKpCBdqdzd4Jc3sQZ55EEyG0waKzYT2YPNE467FAH1I2SPFyxSppmfRKkHjV+Mrwje2o+VEBLAgRjchABCIISESrBSIVPzmbk4ezXRhSNALcsULr4ubAMgOGNQNa2/DOt1c6acVVWeu4SI8K/UfAACga+G1uv706MRaY+oGmLt07Ve40IdX4yI1qay6VmiHsQ7ui69Xvof6ePnbXRQjuFFAQOZu6rujbatthQg5KazC3JkBorqgs3JYfSJQpW78Bew/lp5qsTZnXTPP32hiYisT0ZCiV1Xo/JVKC62wEyEAdwl3G5ZAJG2igACrI+rnIDilie6CF+jKcZj3Ro1N4NMxrr9OHVRcUaUdnHlDfgSweuubVOlAZziad1QdVJnsTiDOu6prrsp/8a6tlznW7lgHUDUPV5FVfeUUXV04vq/1l6eDCgnk4r71bjuh+p77QIUxbCofAPwQ2Pa493buM0ThjU4TSK9ZPwSANsRJxrolBrZMxjGGLZmstDhYlFteNQ3yjXDq3N4OkQi0uWzFIC5XAd79RjRBeT9dqSZa329wISrHkxV7XeRKsLzsZpqLuPtP1WiaD1UcwqGEHLH6SVYB1dl3KxNtldouxHesjGslIljLHrhv3Mx5hSYNXrXSF+9Sh1pNBAgVlNHJz7d2BdarMvOjAwCgq7Pc0XaukfMxLWM7Dc0Y0+TI04q7GlU7ALUbE4DQldVlhepiv15X2hGx3qizlS4YIzWHtKJPcl0Cr+UulJbWJ5q5lyM/p4Yr0DGHhopRfD/ANm/FyfNdcCzlm1u5q/U2+0m2ah3WvIXKlPDMVrXKIG7FFFiNDXqS9WlPp838B6w3x4p21TR0XhSsvrT+EnSuQtlpFSJH5EAARBoqVodXX+hqzcw4sKrOrvJg3XBVpzhDwt6W7DiAK1oCXAk8+SGtc
 TMAEYmqDxVMfdKwrsKdTVLZ4dY3NCZlBdlKy1paR6xoHWoq3BzPTFn7ilqtTRKn9SrFWnWHfMvgIl+B9wncEXXZVxMSHHVZOXptZOMHqwzsu2475+ewvzr6fA1WCsuOjFN4tUbAynS9ACnvwXhKqyPVf3+RUJ20Lxy8ckylD6zYvM1jBeJrAF2g0LIMVQDCWu02eYr2sqhg4Fy7ipaZayf6Y5xgrEIC1zXjqDvCcgMGKydWB6xOdKN0kBGgdm6MA7EZS+ZPNO4XVnks6xIBkC05XXGi7d1ddT06UTnh2pwhWfJ0QjJmADq9VXP+6yrWNMrXwmJFcUZq5q09x5kPXiN4saD/tzK1jOZf8f8qrJi7uuGqtK5v2avodBem+psa7mus409xDm/d+7FOur+i18v+zFq62rTvS1tVNchb+eiJAXFVesYFcpFU1/PVqxHU7Sn72c8KAGSrVrVRWlUSuebMgbeQ6vaMMSjc5LOpT+bTJOTQ4sRUHzQvAxehA49R9FekSuExqCpMnRFoNbKXOpqlHd4CuOBmOlOdKm/Dj6sbt1VAeCUF1vKs7uQMLNdARMYsN5ATXcUdVf2/u8KKIGtDgBVFrQ7NxdJPU88AlqFXzMqLp1eYISedSpWBo7bqfJfErqYWwAVD1pv7VHuZ08kreGfXkDXDmL+NWbTkmmjzKibMY7GPCEAMrRqtKQV7J6dZai3yxzicWHFr2yUbNTAFqEgucWBZqqYpVl6OwKA2+Rzuq3nsZ4G/O1q+9DyEANoPk9c1XjsjIFQrCMyksUq58nFq96zcoMq2Ii9Uj4aaiFYzPW4OuLmHYIN3hrMtj7tX1aj61VfE5DBR00O1H1d65tpbI3iXP3Etqt5ZavcGja7BvUbD5gQLxAopXmqreqreNKehicjHeKAKNiIAOb6qjR4BIIqa3lztK75C7oh1bq+pM4DKlkJXEvKK0VeLOjDjWtSq0+sHoHWqEQz0ybr5RizMpkur0fISJH
 dkHVTgRtTTrXtPzltGJw7bPquf7aT3g1ybLOCMbrJXMHRkqyJqp2rDWL4idpWqzBfeNSR/WSunFZRWCueC2eoU1MUTVoQPlnHcLPT5sIqGWDVHvYnpPYG6MKu2X3R5V039WsPJ1toCgY2Dej0HFVycTnLfoikWqfcWoKYyyY2aw79BgWuxuawfbj+UK/asuZsCDVb5gvcevI7Q9SF3bSOqFj35yFRlezhp+EY4uYM1RK1jUdnT5LtGADWZO6MC6sPjpoO3n4zxg+B8BqiKOhyuSVu4VVdHrLwmB0E3AyqsYjXwFt8rKc1KJhUOakPmD6op+BV68KKg2lkXbk1ElRzd5az/pPUFw7F2DPnTa9qmIkxwQ+ksnKorPgLgZr+TCQIAc0FFEK+6eBeYyePN3xMdD1XyWJ3TlaJ2ROeNB3S6vFrJ6kspbHIfyCwt8oyLzkqoId9cnrmWubkP1veqW07Wd6rG12YTmF/uUo2UBQdU2DEKyPbb9qVmjFVThhBqJdhkKrOp0hpA6GJS6JgCnbnm4bhSmuS5ocaaVB/kVbCuGp2VpF6Fuzf0/U/oxnDFKnCX9YFYkx/xP5gBrYjDOXHeHvBWTsUP9eSGE40DazWVrTpCFL65vkHsFbyaUarPXy8uN1fAfnthqiFAtc7QDr5vnGkOA7xwRs1yBfCAqheNmFt7oiIA5iI/tUOs9C1sa2e5X1eaCbU2vaJnKwHUPtcmEVVXIQACZMwMle1Q7farI+Mv6zFa1XTWKjw9bqi+DgEv5NBXm10P5l0k44u2WwWymmtfb6qbpWiVD67Axk1rL4NV2brEksGcFxiBcwhXNYGLJiMC1JwG31Xj6fjjV9EKQKSdarYH2JynP74mQavk/Ky3NYJ+7nvmswxVK5jytrR2PeeA9WXQxqm22fkKzmi9AQSnVmoRUqrE6iRCta5UMnIcDPWWGvlWysoitBZPcCxoLobOCwNEs1uBu5rTM26kV6jXt6Cuyv0QXtD
 vK15szZFftTL9l5Ukav6WVXHkvNzapPPjWHtfN0Jr4jMjiL7pVawJHAwNFIicU26vsDJ5bBfd5HJLPjzmvKVQV+JERKSNHL1KcwNPNYvrQm+shvdHEgGCttFL5x/VMOqrRTx32gahNxaMFFxAEaz/zQCYn97GIHIzAW0b3ayo1I2b/dWV0V2V3Kx3CqNyLiutbgYXXQS2ukI1qH40fUiaavfy08DwsSdCN8ArPOSIFi6+XtXydVr1JoTx2KpBraz5utKujeBFVK5onrpercbFCMWYs1Zr4kW95ia87aphHkCwa/otnfkyBwtQ5u21SklYFq1nUXxrPIUS2dhKzdbyU7Am31pNgI8Je+WOZOHlpoanIDQ/2Wag5S9mDE1HUWbiWkI17VxR9DVtgIZQWPWxzlXuSGcuufCi740RKlVaE2sNtgNjruLfO4cLyIT00NGM722N3V/hEg9716RVZf2K4qrmHmKVz3SvV/Vv/XVB718wlpxfRX5kXm2qlfwFB9S/BRsTcL5TnczNZVfsX/NXGA9BW4fXmHLerDbmgzGlK/PIX1o7oKK9v+dtu4DTfVztSN31czLWCGgKNwlWnA2oT/AKwU412C5hRf0rwnNmqFuLvKLWqU4GVFWSr2b/rPdDNW3kysN8gsHalq4S1BqxHtG2Zdqrb9+1uklQp6gap10A1Kp9UiM2x5SWuhyW6p7QhVOsUn6lDLQaKYDqQr499RGpCbyaX37+umnvRw9sl8n10hhj5Gax1zaeGZFZ3e0miOk9EGiqxo/ckLr8kLkc+WSqKVGprDqvx4jcR2N42LbXZOwNn8qgIbfiH/xccxkG1xgCIiSbx3fGluPv2pSw3GML+Wvz8yJZ+Ex0pTRqNoDlL0f4Fz1ia0qYA1cBRH4CVzrolevV/oPK0LDTG9D99GUp9S9lshqPejXoJeKPctPnFXSuaH+yd0H4khs5RHmrsmq/+b7SMx4b1jAwhqEG0tVMAvSmEgEY9S5s
 +xEAkEgTugSvAbC5AQJpS3Lmsyt7RABysaGqEAkA0OYjagrS6keobghAzLhsrj++BMMfi64prjyOnEfFfDoObHfdncmbMk6MjhGhkik4drkwPABQHVYzb8zNwXrXNZokr7PsjHVhaAAy2zq69qObnA43WFV4+lG3zhNhrba/dhBRbbLVSGTFE6pfz+uFeocuzlCopFGnbK+VfLXkyglOvFgNvVNZda/DKQy7N1g1xGjULOnaLCY3ZPYvc6rLmHBAREgayVl0hibde/dRW7+HnIOEgKTRagyjzmrDa/4lBAIN5O/nceHJ1azl8GLxzfSdNE10BqsLzlnxomsfeHQ6nxvQmL61YanQuUoOjmepJi/bWu3bZTdQQ388YlV7X4WpEG0tgTEVnJIy7XQ2ghWDd7NoFVXkzsRV5q4gBMAY88C6cMyXMN9/5oXO6fTvjWK4cIBv8IV/fQO8T+J0JPiou5uermukLXVZmK6YdkREpAVWBKZtNNNaPnbDLbIkWs2sFceWkECb5cg2SoeWVhj5CDgikptW6NzeWtfQB6zrc86ebaqkzSw2R0NNIVZIBANp9PUdXiSGcR3fO9FDNWntHIOV+zp2cFUEPqFZ1xYmt4AVKVJVlI/1YcKq2Ssr5qqoUoUrK8RatmPldZHsoTZGr77oYkLyP/eq3F9HmXVevHCwvxdjTGtdk+bqbLcdqc6xGs9OWfM1AGlwYncAsjAQlXNX67PxGMjxW4267a8X+1YVR5oiOiRwaWhzTbKmkRGXsynAarAqfl9lNbyPr62B6MwKBzg3uwlMagxsUt21xR5B9urmpu52nupq4rR2e/UZ7PwGBFczapvukr727qYozDIk+Ru7s+uRf2b50Q6CBza4eCrYyeQA7eysVzyY2rh4WNCrduorqL3gZlUm1n/B5kFXwfAKWFeuU2uDwV8Feqd6nEjrtsYFPNlOG+AJfw83HOj64F3WlXaQb65T7uDu5t9aB5YIEN1ax0pYR
 ITuf4AE2hp3hMhspM5ONa+pKx/IrSQhNzENrZmW+CVK2ql6oz7s8uhXyn5fHbkaFV3IirkRIjdCteHzDgB4ovUwtarK31Q7I9NJzImOagXOXl0a/sba5ITqVuD7Uj/Rf3Q2d2XJOBOsxotVrOG/+NKuDTUQ/5csCqrnzMjSkLZCtA1FdNuEW1E57nOWEBARCdAK3C5CRiGx6sp2fS05Brbt0y6uY+To4gz1lqJFp2tnFTfWnoXRF/LYZpvF42jNGPSDh4jgksCaKik4/8iBF4zjh8Dq0TcAQDT7770KTXJtqA+w7UG1pIGYLaomyyNQG1lcGY2aLVSjXqgUIDk8e15lUN3JN9j6XNWYr4DDHWbbUNUVOIvK//uqx4NV3Z1VIl/Cu5WAnMfpf/dBPa+OV+VpNZx1sqGqqGXMCrhCtuU0gx+HFjczAQBA6No5XozV8BIoZ1hUVGr1lzOnzEy2MKvW/1SD5jDqsOgCjE58bjaTE1dFUYioSXsM2oll/Vj0Q2VdJOdmVz2og6PqWPXOXsRbVDbTa4KmDvHAKhl5bK4UZa7MTfd/x6P1e7pgRk0uHjkeOm4CuJiMNVIc7KjWIW+AuS9r7rMRC658dBi6CKxqvC98U+t3bfRda10Gvn61Sjms/CE/92qsbXFRc3D8SURWxZOu8YJrvXFINKHPX/kZX8+21e1oo6S8biS7gJ385gLoM+4Ort62cSqwYiKn1ezmu5os3drHcph4PjMhUtMd26zKEndAMZKoPlWK6aJ+9EDxuoicejJjagREhHa5N9Z53k4MN5o1xnE/fhkJvEpC1cvVfLmTatErj3U3aq6/7nDvaNYogJxX4MDiW23TMPVz6yLyUvpyI9VpMts28AY6WIm4G1uGAoDarsrmFj4c5wNB5nvh5oibr34uVqzjw2PGHjJCQQDS2ulY1zr71k0yMl5eFTsyAQVA96ZqLAFjCGAel2FxAGBdHgsVUmaQ7MukZhCBMQC77WhN7l
 Ways/4C+7VhbEEWEVK3TS3Ss4vgXCArcw7gBUGdy4Ueu6vq1fn29aYg1ysEd1MA3D7/FTfVA2tq2T0zqXjBt/n+sUJPCtYE2+l81hxR/1enjRIVxu0ebl5G4msR1gz0y9afpbinAI26t+FDKsIFPnpBHbZcZ26jNTMKDKs5GAeplTjWqiAau0OvyDVwMrcQCmjmD2JuPwokSbyMkEAZfvpLoHM7UNid+Wyjq/W5lvGGCIjAoaGMO3mPtqqZGdqVO5dXffXMPHlH6l2rMUSVOJmfjZ4jqX6mVYtMqpRu6Nc9EJ0NoNtqnc78BWjfqVtVq1Wprj/FT0OXqFS27Ia8znlAwSEjDkt5OC7cpIfb3BNttxYCbbePHKNdicanVyFhowEELVNEFfnVHdEBCIBXBipIGOOqJzRgd70NjF0Aqv+DVdbY8BckYiYk5fjXSs9czuTDTdJ/yrLhCC1BkclWmvrWiETnHPBOWOCMc4YcqZBo1ZGFsgYETFGjHHQhNzOZmYqQ1dTl25ymIhgDXernsGFjysc7DjLO8jVWebAGlW6UXLvHI5th91Y+AHxs8gw6KujXqnyVcX5qkJ3N1nRCeiH9EsDogY32kUi0ZGoOx1qgK5+81h13Fy7v7ODoZo5rArKVcrd2Xg1S916jE7zAAhVKjsa9kwCk4i3ZgGRX7pOdrJ6Oge/vgpA24y+9X+UUjVvirS2q9nMUVpr86/WWmmSWhtLxQCUMRaGYRAEURhGQRAFQSg4h4BAkyzQApRzLoiIkJGt5ajK1bwAqllZCdjHny6Mk51etVH+Eu3nTqupafexUqsVBH1iE3zLLiDEfFXxtgdkjZc9MtDbEDUurJuMF3SCw0s9gfwlx1RMVwnrS14XVD84f+kC6D0xO4vDuNB+LrHaQ+h07ZJGnuQmhZkFKObzBQGYImQyiXUiItCkwdWAGvCBVdlau9JQTaS1VlprAiBSWkmlEUBrJZVWpEmT1koRaa2kUlIpg0J
 FpElLJUmR0koRKSIA0lJqgEacdJutfrfbbjTbjYRRzCAEhgSaVElaA2OccwLiEIBWZtg0EUPGnJTr3i6Ck5gdBled5ATqe+fH78IoO2u3JkkEb+pRzWq3I1LdF83WHZ7DLcOt6DSXU6lFcNwlPNjJq9SaQqysuaovuFJtWPObLvo9dSL/z70ukHfVPKfE607eRSuFmN8qjaxtYztqL0FeKCst9d0nIHF4eqqJSJPSSlkgWn6TSmvQWpOSysAINUklldYladBaAWmtldIGjUprTZq0UkoTkDKRLQKlNQNCBK0tzxkAmC3kydpwIKUsZamknAZhmWUcMQ4CGQQ60GQmBihd5KA1IKcwAGCIjBEn0pqYzUe7YUXPGZXzawMBUE/2XBiPCpjODiQCt9ae6JURssis2JQcbNxlPOGhHQJ7nDXnoX5B0ySsobbGlNW4OUPMVxf4eB9Ahc7K79b/GSS6pNqXqH1ErbU1TAl8s2n1YzVZX+FRqBSYUwhA7umL5KXh7KKaTJ0yNH6c+IvPPlOktdbSlN0ppbQyjCKJFGnUWluKNY9DJQJQiAjELVUh5wztw7IY5wwJOOchIuM8YAw5RwDBuODmxRAZ40wwzu3WJoyIpCxny2yZLTVQGMWIjDSVSinTJIZaS1mWWkrOOJBGAsZQK8aY2ye6St/UctneC4aawCqIVGWaflCdYe1iG17KLvtSw2ltABwTOEh66kBnETjpV1WjFg2OiN2Yr7QTwJWr1p2h2hnWliB3NUMzVvXri0bzRRhdSLsbO9Gk170cTHPqhilU0Ux78cr4uKCAPMOS9dWNsQgMzVbylqG1k8bKfBMvjl8CczAB5IiaMYYoAELGGBMUMMZQMM4Y45wFXDDOGWLAOGPczFHOOeOcMyY4t941Y/ZXxgLOOGMaUDDGjW/ODHsyhmD8dNJUynKZ57PFYplnSymBMQVEQEprpTSg1qSkLFVRaMaACDnnSmhOmohpAm5IBd3WobWc
 vNfm3vyq2z6vDJj7vsaL5ELCZljoyx2O2mesX9iF78AbBubZZORrWQzIqhv40h034RBXblG7tYWUb3HNVrnwpmraqnnjZ9wFB3/1xEp3V5ijCtNVBM3WuGF1M3Q6xFbPMXc3++1FjVO7r7hz5ZrhM4aMIWOccWSMc8ZQMAbIiSFnDJEJhgyZ4Cg4BxcEMgJx33BEYAZ8BnsI6DeEcfEiRHNBZIiu3Aq01qWUQZgJEbClKOczBQSaSGkg0kohggZSUpZloRkHRC4DCjRpTVoR48Ye0mi8Om8K2j92eBkD68lZ4dVs1FcGcXVcycMCCejL3eGaWYn+LuDcT7COl0v2VosPnS9sdzJatRTrd8cauMz3nv/cBHIHuxZdPGPloyM/589e6FCVWa31EWyEa0UCDmtE3sjyxIsmaoPIjD4yc89mIaqQ5Uoc37dU3L5+gwA5IgCabClniGiZzizfMTFH5vddAIM931rkzGysY5G5kj90tQ5+3yKDTdd4sIar1sBYqSQXQgguEECTIq1ISyk554wxjVqRKqXUqBFRBaVSkmsbuTKz1xqZlUXoYt0AAOZhzL6itmZMeRtA/2ez0s7YW40xoR9cIlctZe4DaJW7T4IQ82ZFzfkwDXPjqX1hg8U6OUlWBkTVYIsEK9tawyqGBHv9L+P7evyoupG/qZ9Rq1rF0179ItafN5dy85Tqixc1gLaocUOENhpei4DU5oIhXjHoDmx00LAdmKQ+GvZjCIzZ+mDmaceZHxo0kOFK8M5P3QV2g+aF5QrJKj0LyDiBZgDEmOCcM64ANAERlaXUoTIxKamUZqQJlFIEWnCujXlKittZa29V1Wq6+2o7kljjpdVaAagNTDX0F1fzAFj146jXkl39flj3GzQhQxeItvnZuifrTMlajKC6mbtPjVNXScvGfSpqdwL31ii5ca3FLC4SKq7gr24ZrDRnxU4186xmkddVv+diJ0DzzpxONV7zt7GGgTvTm14EACJNG
 +TN3hW7yTGmITxrL5uLaCMTRjasiojA7EN/q+fOVdMCAVETWE/bJZycyQyaMaUUU8r6UMA0oo1REZlgFjA00VklpQaUQkitAyLQJsRkfR2sZaidbVPPeRos262ia4NtRV8HQT1iYgiSwC/EsMTnj6xsxBqNoRegCyMgoDeOLbNa/qxOdjTvFLSBGtR64gas1pTarb19XKnvFV6qRhhr+W+HVLTWkY9ouQmwMgeqnK6bRN6aMrVvnuEr46DiAMeZq1wMdevLhxcED5gpF3GGA6GjT6uyjUK0T1a1IvZPtNBam0chIkNknBt2dKG4+hYPhIxbBwa9dDSR0hq15pxLw9wAJnVEJJXDCJjnfJLW1h5FZUxPE2PV5GRBBLbFQCuKnhm9gs4Q+LJXHWc+vIc1vYHgRg9tssN71nWCcQOBhCiJSqXnpSIgqfSoKENr7iNn0A1EzFkoOBg5eHuUIfjgqysaomowTU7O+kW+M/WQ5EVQojdzbWFEld+yqtweq0kjOi+whuVagMny3MrtKiaqGyy2yhGs0jWttRh0ee4LM6ZSTGYmCIYINoJIYPdpRLNFqDMWV0pdjOCYQx4xxhCJMWRMIPpKOMM4HBxKLHAMl7jhRBeCI1JEzMSLrIvFCMHYpkor52QgIhKCVlqZ8KvWSilhph6ZJ0BY9XNhlBxp2VX2tW8qfq0FvK0qv2DZ+fY6FeDlXrsZESCWAOeZPCvkSVZIpV8ucwYwLItny+z1VmstDJ4tFuOs3EjChMHlNHq73xrEETP13eiKtrxt6ywQbxxo10nH5Gj+rOhlwzfgZ5EjtJVwmGu6m5+e9cnpzBqyqPK4qxiq41Gyyr5S3ZWfamdD9Wgh8MWu9taVavelw0RAJAQPHMd7i4V57kRE6zbUuJe7xI+ZWWicJEQAE2JyE4wUQ9SARJoZHBO5XZerdhs4Kq2ZQSgiMiYELxhqK3NGWiulgKEpvzD5KSWVVgrIZk7N0xlNAQqSsTeo7t
 PolUyF92+8PVl5AKb+1wxUXdlVgvfoQKdSwT5kWxOM8vJgoX4xXe7n5Wtp/GJZ/tHB2d+8vMMAn40mCpdf6YZnRfHT4fCG7r3RiP74ZPiz02ES8K+vde902pHggd2f0sKKKhihtTOqSWOa7cDg2mg1qOuy01dud0jfo5ozvmKlIK5eCirU1V5klXJl9PgfrDFji9uwbhzXo1D+3qyGh4pEAYTZU9Bk261zZOjT23FY2UJeZbKqQtR776ZPtrWmDWRntxWDcTYrNVrzlxmiAkCzAYrRAwRaa6k12YpVICDiDBC1Mll8pU0+n+z/TaQLyC8M8ZrL7Thfk2yd+KoPq06r0QMXvqyMeDMBEM6yMlMkNQHC0bJ4upTExJ+P57OiuJWmz5fFrVYzFfywoGku201xuZFKUqdK306SqVInhA0hNNGnw/lZrkNEjriRRruNOBGstuUuOKfMzpyVgIVtk1epNbeGISIoRUopzrkpaLzocrkhAa8bV3/03zqL3JUXrMz66joGZGgnjtM4hndr6HQ2qzuiNmbmhsLoOO2SMM7kQl+fx40NxGqVLAxtOIQ8PNEiEam2PR6iCSK4XtSMPMttruFgzWAExhhDBiYuD9oSu5tohps1aa2Mc6Q1aCCNNZbxD8y02sr/BRfPcErJDzm4GpsVNPgxc+LzysuVYEMm6dks35tn+8v8+TK/mcYa8EVRPsqKrSD42qB7fzrth0E/bq7F0X62vNPpXG00/uRsuFB0NW48Xi4Zsq93B52Ac9BPZvMfHJy/2UhezmdLpW42G391e3Cr2+LMrMujWqOdJ1U1zprH9aYSQFEWk8n45Ox0Op0mSYLILu9eardadeuxQsQF9e2MGPQ/ei70hFdX4vaCPodXv6iuyc+TKTqTtzr9glkmLNUwZ1YhmuyQUwJ+dMERKJrV6wx8N4CIbLmxScjVXDbmGlRvMtSnSQVy5meI+wxSKqmUIs21tqkBBBOd12TqWWoPEAULP+f
 LO6+0GkbyITHw2sTGSiuLvVLfgC6GVJ2liealmhZqoek4108X+cez+b3Dk6+s9UeKSlWOpD6dTIjzb3aai4D/fDonhC+U/MHx+Vf63QDhZZZ1gvDr/cFhtvjXe/tNxt/ptglxK0kvp8nxYnmz2eQIZ1n+o9PJ80JLwMtJeKMRBojCZE2AGDIC5986Kw2cp6K0yuez8dnRdDKcj89OTo7LUgVJIhiWs9Mbd95tNdvWG6gpblzVHjXYVuq/ApeNDVUAtiIj8FEj8H6pw/HKZKjF/ZzkLZz8FBAm2ehs4Qo3zHozoLX1ujz5+1VfjKFG4D6rROgvYAK0WgMwby7bhyvWnWXTDJsaJRKMG4s2YJyAFJk6FucEASEyYIwIbBWVUkbFO1CRBmKakNlyXnebCl7WiDRSXrHNV1Q3+GnphpAQl1KNc3mwKM6LcizpSV40uXhRSAHwm1cvZYg/G082wnCsqRFFqeDHUj8r5PU03Y7CoywPpLqWJNea6UFRTkspEa63Wv/w+lVQaiELjTjRuBbHaUSfnJ0d5vKtXvtUqqcno4NSDkT4rU6cCNZLEs44J1VqnQq2mYTdgHNnr2uCIs/zxXg2PJ6Mz8siAxbEcXrl6g2ttdIglUSZjw+fNa7fFSKoOMKPBlgoYW0hHq0ypEEsWTbSK9cgiz2H8HqMz4q5fpzHpfeuagreFCw7T8H+4w0rj24XtvOmmLml1lozEzdyKqHiWyJyQVoX6vMa6cKLuTCK/cgYmBQooiyVUorsniXGLjXsTsYGVUq5v1KEIeeccVZdx6ooqpQ7IFAtC14Tra1Xssay5RJNOJf6PC+Hhcylmin9YpFnSsecPVpkzxbLd9qtsZR3koRINxnrBPyoLG7G8d1BNyf6s9PhdhJfTZMux+Ns+cs76w2OPz073w6CS1Hw+Xg0kvp2q7WdJjpb/M9P9jbSxq2WjgVrJqkupy8X+bv9LmbZ3Wb6YDJ7kKnNNJAlU6DH2fLPjk97
 Ufhmu7mbhJtJ0Ap4isCWk3xyfHz4YjqbIxNRnMg8V1KJMMrzXBPkpQwD0dVKFpkBKFWYpAofiLWUhFd4VSmfobP6BIcqfIL2gZzWACCP75XsSC18BhVtgz/AkKbQCOgD12h4DsF6sgZAFdTJcgmgjwZXNmulBtBW56DddsLYqG7WVOkytLF3v1WYMRy5iyAoIKW1sVBtbIvZ+aC1zmVJi3khNVssgY04E4ssWywzpZSWstlu7Wxt97vddqtFYAIz5KkTELkLJeZKj7JyWsqlVO0wGJdKk95IotNc/fTkfJiXW40kZmwk1X5e5poEw6KUSymjMPx0PDs6PV+7uvtut3W0XA4Q16MgQfpkPOmF0XYcxYi/82Tvdr+znSbdMPxoMmkwvp4mcyn3hmNVyLjbTYVYRsnNgD85PN6N49NMjYpiLU4I8c9Hs0BTP2ncHgx++/GzQZz+rZtX8rL4i+mCR/GlNPro+OQjxot8eVvQr241OpBPxpOSRKc7KJcLqSQRLPNseHiU5wVyzrhotdt5USpdZ74Vf4nsGFUj4o+oRxRW8+ZW9TtF5wvoTa2f110VF1vDb8Utq13QfS/cvr8e4PalzdIf+zUCaO0XalS8XNnYzPkSFT2hjUFgbZcRtmIQg6kX8QEG43wwxjmwqu5Fa9sYIgSbONCkkUgr/dkXnz978kwpmeXlcpmVZQmAUqoSIGk01zc2vv7BB++89dZg0I9EoAlM6RYBHGby49FiUcqCKNT6YLmcSxkxcaR0qakjxHFe7u29vLo5SJLk0TJ7NptvxmEvDDnnL2QZhdFQqs2Qp+s9xvl/Op+cTGedZvP9XvuHwzHXcq4UAd+bTlRRrsXJN9b696ezo2V+pdFgjA2i8Eaa/Nns7Acn56+1Gg/my93B+neuXCatn0xnqMVGI2VIf/Ly6JsbG3d77YSx0e7Wz84nT+fLUqujXA7K5R//xV8U52fddvKd3f6vv7aN+WiumAiTBggpS8nEY
 rGYTmdZXsyzPF8uAUBpKKTsdTobeU5Nb15axemw4j0IN96v6D6bgrH8WLcNLQq9f+KDfQiepGq4ruH1IjqJEEB4uxacpqOac2R6YJxDi1MA+8B2RN8Td4qzPQHcNPLTjlbSW1iFwBnachPS2t4XgTPGEKV57hECEGmluOAAwETAkJli3TSKv/bee99+/71QYLZcaFmGYaiByrw4Pj394uHTn33x5J88evwf/uAPdi/vrvX7nW43iRMWp0eK/8nZ/GQpsdNm3c41rXdb6SBOzsvys8kEcsmD6Ea78e3XbxZaP1vKe0/2W3H8lUu7PxsOl3lZar0s85SxD7bXP5tM9ubLsdK/sr15UpY/n8wWSv/1zY1mwP+XJ/s7ofilqzdPsuLH55OlUkEYbUfBh+ej02X+br+FAf/sxcH6jau/tt7544PTf3F4+pev7WrGQ17+dDj8eq97o9V4vszeKPX96fgP7+3dubqNoH/2+PFkPNHD03gyvMKKv37r8ru3rz1/8UJK2eoMgjAOWcCV0oBFUY6n89FkEogAkGnSSqv5bCGVRO72jXMPr6+zoBkIH4r34ZaqsMIhEcGFgVfY0PmfgATKfOswfbFsqrJXa34/uYCa8EBHRKNAnaGADswO38ZZcfmsGt5NyNcarc7WrNL/jke9rWubYeLhPh1qVAZHxl31HwEoTcr4Ssb+AEacE0NyMVBEFjbbrThWOJxOJxzCVqfNGe+t7bz/3ge/cvjyP/zHP/mLB3ufjsZRFERJHAjOkJEQDalDScsghptv7DX6nz8tO+uD1zd7b7Waw0Qj4wutn07mo6xAwTe31iLGPz4ZZlINWo290SgbT1+/ujspJSPsR2JbiJOiGEupFf2d7bV7kzkCAuc3eu1lkf/7oxOYL5uE37117c1ee/Li8NNHz9/ovnW91VwsM6XVsNC3e53z5bIdCEySHKh4cTxpNG/2eg8nsz8+OT04n6go+s7Wxh/d++Lw44+bctmnrF
 ku/8Gvfvfa7ubRyZkIEuRqMpk2GzoIQy7C9a1eb1DOFvnp2VmZZY1GEnFEwlmWl2UJvmbF/TFFyh6aKwxnQ0s1a69W8kI2a+n50cfIvQvv4vn2GBeKX3UD3I0qdJF/ErZN4tjISh3f4AKUAJ7MwNsnKzwKjvap7sVVN3fUT7WPAE7BGA1vU53IGHKGDEmTVtrsUmviSSYWZbP4WmtSSslCU5o2tNbHpyej4ZCIAPlsWfbXtv63/5u/8Xe+/0EvjXSpymVeFlKWCpVKApZyakzOwp/8UfqLHzQoG8/mP/3wi5PzaYzseLGcKV0w1mmlJVDAg1Gp749nu61mNwzm86zf6dxstRZSf3Z8dr4selH8cDh9r9O+0WpMNTyYzv7wky+y83FPhM20MRCiuP/gW1uD1zvNo6zsJMm3blxZSyJkvMgKPZoQgx8dnrJF/vx0xBgfxLGIo8eTGUh9cD7++fODnX7n8lrrf713//H9BxvL0eXl6QZkf/d7X799/dLZcEzIC6lyqaXWIk7iRrvV7THGo7R56dKltUFfkZZlQUBKKhHwOEnSpGkItD5YdVDWIy1+lGsaGS06/UhaU8GHP8wPzluqxanI+TIWMzVAmFONKWlDjT7s6ljScJsxSohcqY25LlttguvgCuzq8wEATMUeVZEsHzNykclVZWGSSQatZqWUTxeZ85BzYEiuyomIiLQGCOLm1sZW2mwWZTkdj4UIGs02D2ISjV/61rf/q19+v9+OC6nyLFdK5YssL0ogEEEQhEEyPmn/7E/ig6da8P3ZfCHlWhTebSRrUYCE+eHx8Wiy00rbafysKJ8u8tuXtjd6ncfzvET+j+7e2mq3fvj0YLfRuN5uX0+Te8PJ+eno9uXd/8PX3w04+3S6yB8+SvNlnjYbYfB4vvizFydXO+1C6d1mM8iyX+y9RGDv9zvnWfbnL4+HpfqT/WPJgv3R7F/9xedBFP/GzvpXOw18+fzwP/77wd5nV2LVbkZ
 /6f23X7t5czKZZ3mZlTJJG51Od2NrhwXhIl8OpxOplSyy5XIZxvGg12EcNcE8K0ql02abB6FFnTPT6tyJF1iqBlX71423c3TAsaP5T5N9KqXVzn6sL+LjwsVdXZVDPQiD+9r/apaEK10jUpaZHeTRwdZAyVycwFbemeXv6PS7bZq9FgBW6+YAwCxPNv4+AXCzWIQxk3GVSiljJBCgJptFsLaBJtKgyGa9GCadQbcoRsPzuNWJWx3ORZnnRCgl3X3znUYj/eGH9754fjJfLAVjXKowChWRJCBkLFt2P/qhvvra6Pqbe6XaXOs90jCV6mw4ub6zs9lMZ0WxjthP4stpdH883R/PfvXapWEhrzTSPzs+H8Qhj6OhlM0o/Pzo5FYzvbOzmRBcbiTHUuk0vvz+ezHC6WKpBH+a56+V8nQyHc6z9NKV0enZv77/bDMJi15/VJb//smLv3p1a7GYH7caj6dxJ4nf3N0a51mM1B4eru0MGo1oPYq/8vbbZ+enWlEUJ8V8UZRqs7++WCzCMMnL5Xw6ZWQWIOSqyIi04EJrUlqv9bob69uMcY8LeqWi2dNqhVTnTtiojVvg4egTKgL0lmC1SLFyVKy3tMpnHrXW4fLeC5FA5ySZiCZaRV7FjKByyVfAjz5XSWSr5SurwrXbTSi0hgn5yaSJfATU+PLovHizqQgjsLuEABn+RCBuwlCMEUMCWwSq7R6inIfRzuUba+vbwBgQlHmGZu8dFhBrXL319s7ulSdPHv7gF/efHg6JSAOVpJVSoDUxZEyw5w/ak9Hk6t3DNKFlWU7nDOnF2SibzDe2N2Yo1gi+GE6vd9v3xvOzvHw+nl5Pwm6aTjQ0EQ/mixKI0uTxIlsMJ3/88Nn1tf5RUc46a41AnGbLH7w4FEm8EQanZdlJ40+OT5iOwjR9Y2Pweqf1cDT8yXiaPTscb/S/c2nn/mgssvmTZ8//jFQrTXZvvrb4eLPTFDHAW6/dnExGRVlGcTNM
 mlu7V4MgQsYG61ukdZ4tVWegSZ+dHIVhNBgMnj2dSSlLSbnUrUYzbTTBaS1HiBVisF5O6iHr/1S5X/LHw0pxI9aQYpx3unjNV10lspVKzgy1VxZmkZnZVttSofV2NBGzsSJvWyKYQiY7Xep+FIDzlMB13J7J7PNAEMyWoOjMl5olXrMJqgirJgIyRSGkAbjLNzKw6U4FpJQ0G0Bo0JwxHoSNIFCl1EpiEAAAaaW0lnEhyzII0zff7W3vXPr43sM//cX9WVYKzoHZjf+AQDEmhsfd+bhYH3Ru3u5t9I6VHp+NFkl4mhdX0+gP9g+CIDgATNP0pCieFvn/5+mLnAevt9KdJJqXRab18nR4ab2fav2VW5f/zcf3ck3fv3ntvbXeVhp3AvE7v/tHrV6/tTb41lpv/+x8/9GTN7a33u20fvL85aX1XqIndH58NtqEnfX1JAl2d+/tHd7qdw7m8x+fnq+laSuCdhRtrq89e/6Mi2iwllKWvZwvNrZ20iRhnCMXiFiWeSnLOE0n08lskbMglEVRSinCYPfq1bTRrFtZjlIqrBBUQEQHniqcVOGsFj4iAl+D5QHnrghYjXi9ntmPexW0XHXRBAOosoA26WPylszkOc0P2mhZ53FXTOlor7KCtc3jOyOaTOBCE3AG9ikNjkqNLLwv7xIRIJAxxjQAkS5Lu+rDGMIMGXAOQGSynVqZzLwpSWYMgyAMw6gsC600ApRlQUoiY4wzpbFUvNW/9Je+uXHtypXf+dOfPD04C8JQSqlLCWADEIKx2dFxfOX6r1zZ/pOjs5dPnl+6vPsP7t4aBLwTit/9wY+//s2v7AXil9d6XY6//Xs/uHn71q/evLSZROfLbKzkj5I444It80QE5csDJnV+5fLpMr/daYYigDC8tbN5rRn/7pP9NIoHzVQge3e992I2/539I5EtMUlUWeZET6bzF1mOefbpwcmdy9v9gxdqMh0TXt3aIeRKY7/bHQzWCHkUJ0naC
 IKAALLlYjI+f/r0yXA2LfPl/t7L44PDIs8E50zwt95//627byPjX+I3rGh58t84twbrP7n0v/PcV71vMFrRee7GOoSV+tHavaqleStzxrpDfomE+c8kkJSDhMsTAXM7Z6IDZtU4e4/6dYlIawI3cQAdRRH4HXZqxGlX5ZkFyWaxMoIpgzfAJLNRjrkqAuOAqLV2G+5ocy3SWkolpVREKIQCKlSptVKIUmsNCEEYJg1AXkjc2bn8N7//S2/cugKchWEYhIFdGsiYGGyEb7x5OJysJfFkNIHz07VG/OB8kkt9KQxZXo4KdX50KoT46b0nSZ595/rOw/Hsf7n3LOIs04BRPJ/M51n2yWiqW+13b9/49u7G4XT+py9PPj6bRIO1uNe7M+iCkh8+3jt/+ODl4ctPx4uvbq31QWWHR+XLl51G+nKZX+40Xy7meZErhi/Hs9cuXc2v3N4b50sl86I4GQ73j49m2ZILxjiTspiOh2cnh/svnh+eHi/L/PHjJx/df3h0epoXRVGUCuDt99771b/y6420WaGz5q1QxZAmSISkfeVXVTHngt6Ol6C2kIicQQguUF5zhWHVzLW39HrfO2fevdEE1kmy8we9c1Zz4oyuJ22YFarl9cyayWQdcFvN5EnQGwcOmQjKWZwVrP288QFWH19AxhHNZk4m1VnNVcZN36T26XhZFEXAAwh80Ax4GE3nhVaSEZVFXpRlqbUQIgwEY+kyW66tbf+176azP/iTvWeHTAiBTGU5aJLd/vuba6WCvWUxB7z19a/95ntv/mj/6CAvf+/B81/6+vvv7qwLwf7s+Pzw5OybX33/g8u7+5Ppp0enaRz/9MMvysn8O3euPpjOPn70vNVIw36/1HSp1/7dR3vroeg1k79398bz4eRXblyJovAXxYTC+POz4U/PR7fbHX0nGvW7T8az5+WLcDQuk1heuvrkxREFZ4Moij/49otO74vpYePk5PHJSXl4MCuKa5evdjq9KA
 gF50UhgfGk0QrTZtruPnn6ZO/Z86NFxqNo48135O23KYjspgw2W2O1qkeJ2yTuIpDQ4bHyeIynYcfx1bOoslj9BHCQAq8w60Reu4r3rEVNv3vYoAM3ObSRS1qBrUElAPug4iq+5U60H8nRpJ2E7gD9ZR4cY0wp5cwHZMg45yCYLv22ZJaABWOMM20iZHZ/KEVEoHUhS8pRA0AQBoFggne7fQIqsyyKYqVkWRQSSGlFHCLOVbZoN5pvXrt8eHSuSsU5E0JEb73f++Z3Z8ienZ0/G03V6fHVW7cSxiTn/+r+0ybAd67unGT5jw5Op5NJ7+zlla++9YuT8zu9VtpI/69/+vO21v/Hb78fCnYljf/li8PtOPpLV3YDoJv9zheT+Z+dnK8R/+nJ6IeP945Ozr/91u32G+88f/T09NHTq/1eS2nJeLC+dnw8/MrG2no3/eHpmItoXGSUZTwKW7NpHqSfdbaXhyezklGmP/zi0eP9l+04EVHEBUdNQoggCKRSs+l8OZ/P5gsVis6tOweNtecvT+9ubtzptokqOxIvQAds0bNTxwR2IaTVZl6Tm/p861v4wn9vC1TDXvtYMyQu+PJOnVb+u6Er4VyiFXSu4tU8/tXu+Wkeu+J8M9srH0+rB9LQl8LXwhb2mNXb1Btt2Q/ttiO+np501VOzSZgmUqQ1kSJSWmktzaxSZZkTLAo5lZRGQRqIRpxEYSiVIq2zMl/mWV4Wi2w5G48GrealS7vRR18sdC6JSsQrN26/s7a2HgW/V6ifffrZ37515Vt3rs/y4loz+eG98d2ttTQMbkbBt9Z7P1jMkktXbm5v/+zobA30tVbj58PJBsCTefafHjzd6nfY1StPXhz/84/vh+2mPDvvdzu3+90XPNBad5upSkKlJGXZdn/wl1+7EnP+dq/5//rki48++jxsd7qN9O1ea6PX+6Nf/OLlF/f0u+9/9fqVGxH7BZMPT+U9GRXNHb7VvDzo6pP9h88e6dm
 csmXQ7QftTsCQFblOGjpIZKK6V2+xze3ifHI0nf+Hx88Hb9weRKHN3VUJnosq2EdtnBFKDn3eUvS1oeji7WiTiXXMEUF9/Z0lvi+p5/8S3iISjrpNjGllJXj9Kmj2fbJGJHFCRNRoV4m4xaquYS7HQKuzBJz6tu/rFnFVr4cm1RkwxhkDxkyg3tZ6AjBAYynqUmqpSVkzlDQpIk5EQEqrZ4viRU4fjo6+1e8ca70WikEkEo4NHgQRB025VjNZtIke7b/Ms0Igiig+275+XyT3Hr+4FIpmp1X0Nvfj1h8enH3+4vDNy1vrN6/96MXx/XvP9Hj62nqvc/Pm3icP/vjpizIQv/34RdBItzrNNrDradR/6+aPXh6dHw05gztrnTc2+3+ky0ir6dHJV9fX/vLO2t1OQwL93//t758+2fvub/x6X4jf+/mn7//lb8Jsyc5O4/7g4XD06U9+/o9/4/stpSGKQtAfH5+Fm4PF6dns8/saUW/vKKX3gkZvsD3OKNrlMeK4VFkpRZr0moku5HyZdS9vnyslCorb7XLvxdPp4hcn59+9tMVtTBG8H25QZiOXThGjz3xXbpLRcuTYrgYRYyNcMOEMImrq/kvReeHl4gAgvCdk5kTd1PCAM61htr3Vii1m9+02WsEbl4gAJururQUfob3QCB9MZYhS+4fQGLGgN8G1RSFppXnAiXHidstvk+w0u0WadZ4AUEq1LPWS2EjqAtnPRrMNwX56OryWxO+t9Q6L8koieo2dU2qPqMiam+9/99ep29dJ40qURCI4ni3+4vlBvMh22o12EAgOT8+Gs9PzvNv9+2/d7At2rxUfzxeT4fh779zeTqK73VbIdh/PF//0j3/y1TduMYBbzcZrb9w6/K3fPz487dy8cqvV3H3j1j/9wQ+fffT5L/+jv3+yyH/nsyd/9yuv77aT4WL81Uubd/utjzb7f3R0PnyxH4fRP/zG+yWp//Hx4//13uOT8yEfrH/3+uX/
 9PnDPwQaPHkM52fq/Q+SjTX56Gk/ChcZUiFnaTvrdZL5rDw4WQQJRK0tmg9Ph0W3y0s1PDjevHa50+0eHpz8OE06YfD+xqAGn5ptZkfcfqtJGVPO6l+LHLt7j1PmGvz41gKcFWdhnUJrnEU2RLpySnUWgHsON/qZQC6uUKc6e1Gbf3W7c9U4b+VQcCit2b/abeC7Yu448mbVe+IMGWMcmbYhU1TalYuA2amXkDFiTCOQtIs7takc1ZqASGsk6gbIQA9CETJ8Iw2/PuhsCEFF8e+Oz3/rowfPxvlBRvtK/D8fj0927m5+8K0fp4PPWPy9zfVf3Rj8tWs7t0JBp8O/fm3ne7sbf2Nn/VvXdl6enuHZ2UYgbrebVxrptSSmh4/6ef52v/One0eHk/lbvU633fz86GxG9O8e7k2z4u7Na6zZ3Gk1vhhOf3hwdnNnKx6fqPm8FQVxt/l/+fnnj18eX711c1jKw1l2Y2vj9x48Pn3wRSvkDycLhvyDOzc//dnPijDavX6tH0bdRhw8ubd8+UL1ByyMmUaxs71/eJyfDyGKIc+ns6wcrHciwc+HmaIiiUPBTk7PdbsVhsHJ+ShupE2G2WL5o+cvJ3nhuIB5tDhWQmujGV+35lEAEIB23jmRSZsjOq7FOor8WPuPGsgZDB4EUB3nj6zhiLkIwCocaxetx49s8BI8z5nWo9bVkmp65ZZQY3W7i1BNibglm7QaRmCCm+0btNa6MFtCutYgIGPM6HRt/CSllCliMUvsEQTovuAfdNtzRU3OTws5HU42onAjivTp8OnB8Rvdxgfr3WA8+fMPP8ey+DubvQ86zX/6F18cLbPrafLffuu9S5e2/ulPP/39Fyf3zydf3dncefv1MghGhQwYy5ReRDEUy2eHxzd67buDzr85PD3Pyw+uX7qxvX6rlV4dtH9wcJakabvdRsCvbHTbcfCLpeSt5uPHTxDZB+u9FodZEG698VYnEA9H0504utrr5m99Z
 bl1WYH+83uPu2F89a03eLMzHo1+609/ErQ7jIvpxra6dpNzoYgKWYrTo+z0PBusi80NkGq+zKnTjjiGAStns7DfbbQaAaLotAVgpmHz0hZjLOS81PoCHlbYyBR5uPo7qP1nQeyeweUvYQOT3uV3cNK1KgtvIzjPujIQ6hjDGvz8PgYrYVj7lVuupMEsbwezkzdzS+bdXKP62lhyzYSay1WDuw2VoSvy8wUo5ktmQ7LAGeOME2Kp/SI552khklnbSaSUIqXIxURNcNRs6LyUshkEy7Jci4JRIZ8uSwqj97vN6NqlPaV+cjJaD8NvvXY1L+XhPHuz29puJE9OR/uzzOzs/+2dDRTiu9uDRIhJXkrGkzReiwJFdK2RnM/menQ2fnz/3z15OVb6H9zYPc/ynz0/EFm2P8/u9DrdQPzw6Yu3ttcJ4MFoPpplc2D0ze99zJOPT4f3J/MsCPibb/z4fPI//OLeh8PpvclsbzjS62tTJf/oT3706cPHv/vDn015fE44kTrttN5O02+8/0H/8jU4OysOXs4fPFbjGUax7K2p6aIoqNWIwpPjJbH48m5jNhnPFhPk0Ou24jAWotVrM6XGy4IYC8PggsazKMMKhmbuu81T/ZA6Jl3Vy+Doxptz/lcXQAfvalV4ALdxQQ0bmgg1OSIkYU09gytnEtdhbWKV9V1KqFYsooEYcG9rugZVvFiDOxprw+zuQGYD35oZ6tHsthdlHJHbBeCklXI7PwMCQ8Y0gEkjKa0lUWgr87RSSmp9nqmXpWA8OMrKlLMn84w3kmYUbiXxRr9zI4mutxp708Wnhf7gK2++M+g8HM22mumVmL34/PPi8ub+bLkWBn/tzpUfHp3/+uWtJC/WABln24342Xj2jd11CSR/5ftqOFwq1Q+DTiAixGBjcF/Rdl7uTedzgGBtcIrQEOxfPno+kXK5zMIo/vruhiL66qCzPzx/+vBJp9//W3evbSfxOM+CNEpGIxWKxt3Xfv
 3aztPheKrViIoG8J3r14so3Iyjm7duHokAGbvUbEqVv5zO2OmIQGLajHudxXRWnJ6VRyetyztJuz1bLEe5ZL2WOB8WgGvNdO/F4fpr1zXRIi/7UbjKSMYvd0Co+ezmLQJUxW0+mujRWSMjuPhaiZK6uVABwwcuAXws3T70lXlG9VPBX9/qc7soA2yWvE6HtVv6r7WbRjWvfsUA0P6JzbXr1IIJAGbzXUQT6zSekKkrZEDc1tvbpW9aa2kwqZRWsijL+WIxXMwl0W4ScaBBHHHGp1pfX+vtpDEHWA8DTrop+EkhoSx/fbMfMDzI8i/Gs4OiPC3yAPF6u1EofavT/tpG75Pz8c128x/eusw2N39v7/jH55P/8ZNHEeNfv3YVrl7/0dl4f5H9Pz57/J8OzkINnTQ+zPI7vXagFQ0n393dvNJM3lzr/sqV7a2ylM/3N8KwLGWWFXe7bXj4sByNOeK/f7T3clnkivD8nB8c9ZrpRhB8Z3tjlGf8yePZeDSaL/74/sNPDk9CxtT5SC6yNhNfu3ypV+QciXZ3eKez1u50Ww3otIMb13ivv7G5HirZioM4DHcGPcqzRVk2Qw5Ex9PFy9lcO78CaqNvi9PIu0CWusyxhggv2G810iX/zYUIY3V87V42OGWVradxz9JmxD2KvYdUXQ6rq6FJ5+g60SOAezpRlYKqgkc1WvXtcwFfwkoidur54xCRM/MIGrMdmV0IL5Wy9a8EnDEE0EorqVSpZFkWZbnMs8lkfDI8O11kPIi7YVCSjhljyP7qRv+vb6/1w0AA/M2dtW+v9wFgLQq+sdE/LMphXt7ttbZD8UvvvrW/feXfPNr7ydnkrChOi/JsWfzgbPI/PdzvpBGQ/snJsBEGC8Y/PDpLSV8PxC+vd9/tt7+10f+DDz8Tx6e/sbv+w8d7y/Hk1y9vlk+e5tP5J2fTm83GdhB8cOvqZDwr8+J2p/Hw+GwqFaSN33jn7tfWe4M0+rdfPM7ORpm
 Cb9y5+c217j//6N5plpcoFBfrg36nmawF7On58NHey7VGozsanp0cP11k0dkZFAUuFtn5+dl4mknFsiwHGGfZQaaS9XVotobz7GWYXrt9c8nErZvX1pJwt5X0IqPlSVtr0ngw2i7zBk2g3ZJij0BFpDwF+kH1IKu7H1jzklcIjZzbpZ3i95lCR0/kaqAIiDlr44KfZC5H1RsCZvi0Os6jHLyn5VEI3qZkrB5YRbtDpMcw1qEJlW1qK+cRmV1YrEkTSU3SuvPMhKC01lLJLM/PxqMnBy8/fnjv40cPh7liIiyJSsJc67lSD2aLmZS/fTw8XGQbcTSS+ufDWQDw7qDz09Px77w8+1dPD59N51/ttb/a75xr/ZPhBJXeisO3+u3//vbu+73GP/tPf/FeFPyf3n3tb1za+Md3rkxL+X/7F/9OHx0nnO8NJzf7HQj47d31QRzuNqL/90f3//hwOMmzZVG8Nmg/HM8ejmdrnXbUTg9m8xzZMon+/cf3UZaPTs6fjmfrrWY3CbOz4+DJoyxtLAjTfuefffYw23/eyhYTLl7MlrLdUwHb+/iTrFSD996TcfRk78WSC9XvQ6PByuzs2f6Mc0piNpnAoyezrJCjcf70ebn/cjhdCCFUUUitx6UMhIgCAWgtf7LLZv36xYor/PqJilSdF1FB1J3wqsNxwQdy7pV9TIyJZnmqskhyH5AINAnXjlVnitzVLKCdIvbGgHHKHPfVjNLKgibfW5dltwu0XKmrPxgdrGsbhJvtyBEAzENqtNZIxICQiAEA4xRw0lpqtciWo/nyfLY4H03yxTJoNvnmtZ5UgWDnUv3obPi3Lm19uH/8Vzb7gygYST1WWnC2UOogL+fT+W9e2mgG4miZ/dbDvXxd/t2rW7nWv7139K9/+tEO0d/5/rc3InG33/nnjx4zJS/1Or/38PnfuH3laqcRhCxot17rtvZOFSP65huvncwXB5P5u1d294Lwn/34I0a0zPPtNBZB
 8NPTEWolFvP3d9Y/Ozj5leu7x+fDnxzsiyh6ucxn09n17fXzkyM1WNdpqvLia4PuT4r8cHJWdgd6mV1VmifBpw9eBpyJZmPv+Ly9vk5PH8oXL4CFEKei39WnZ3g+Ut1OO42LvGDZfCFl3EzLZSbiYDmdtgIhlWbEpFKzotRE3EACcXVraXRekUafcXLf+iG/wCweQnVN6N9UC57qUSF7G0ddTrVaBBIQWIBaiHl6rkGzNkuICIhh5bG7s8CWz9VMZnKumdtEh5zpXQEYa1tMoTufyO6IgnaFMJpdwH1ZiKFWAFSKFkWRLbNlKeezxXQ6L/OCNDUQR+Ph2WLRSNIbSbjfbIzzfBAFz2aLv3Jl63/67Mnv/+hn//vvffN6K/39e0+ePX72j3/tO39yePrtrcFoNP3dp8+3W43rrTQGKtJkPjwLArE/XbQbydvvvrWVxHf77YPR5D/uHyHnbDF7rZUMorC52f/5yXC8zJ49ePy13c020D+6tPGHQL9IgkfjxZ/uHd1KwmSr/zuP9qjd+3g4+9EvPr3/bG+xu8Nff+OzJ88PlTx+/mLz6mWxfamUcO/ho881KME7a2tFo6tPR/zzTx/tPWev3YZGozgbrbUa5Wx2Pp1yTXpjhzY2BGJZFuF0BFKFyKaAvWa82Ntb9vsyinvdjp6Nz4m2+p3FbNHvtYHg5XRxs9dhjPmxqVwiAL+U0iBgBbl1LluFKdbR45BX94+8ljdHO4YCJG2fJu/oi9zNhL9abSG8jxSAe76XW+RfD7JahW51/mqzrVaoJ2a9Y+jSAMb2dqf4qJPT3eDMU4EImqSUeVkIjkzKpSzOx+OD4Xg8nJgqJSklaWKADEEvs/zk8EW73+hvXum0302jaam6SfTZePb+dP7Weu+jZ/ufn493uu3rW4N7z/eeHp3s9rpfnA5Zv5PleRoGL+fLWSEL4NMo7SNdWuv90cHpkdTH9x78xuvX39teezlfPp0u7rz37pKxo0UWM
 3yr1wKiA33zt1+c8Cx/vxn/6rVL7W73Z/tHyeHZt7cG7/Y7R8vsoNduBuK/+85X//jgJBHs8qBbIF3rtr9zZefT8eTFcDzY2exzHCKPkuR6HAaCnutd7LYkIZUSAoECjj75JGykFCdquYT5XNy/B2EMl7YDpeXOjiAqigKUZlqzIOw0kjTL5kW5jONCiLVOGIaCEANe3wUJHJV4Zeuw5FVijfLqoKzDl8APtz/Gesyv8i6aZ2lbyJstUQ06tYsYESIKchvUuFbaIuXaRKkZumAJz6wYJjCbjqH18WstqJgXAc2CDSRA9E924ASIdkMHIw8fOK7NV2SMCeSAUBQFap0tF4UsT0bnJ6fH0/FUliUQMG4fOmsWN4HWw71nZVHwN74yThJkPOH4TqvxdJn/4dn4q93WndvXP53Ov7I5eLoUjStXPjse/p/fuPlsuvjTk/FiMn90Nvz69nojjjd77Ww+eTmZXY3CX7u0MSzLB430//v08O1e82ojaQveiG79wcnwlqT3uq1L7WYzyvqM3Uzi33zrllTlH3/+8MXp+dvbGxvN5NHZ6Z9//vlH9+5f3hzI61f+3cHJ7OQ4DgTb3u13msvx+Wd7T54/fNhKkmgwkGkSZhl9cVT2252y2BRx0oqXo5E6HeciDDYGRRgiF4HgshHxQBAXGkAUWUFQHJ9AGLSaDX52StMJIuYcIlmWp6fTZnuWxkkoGpwFgsdCcES0WUpPJaveTy2O7YbEbe+4aoZqupjHNuPnnSTnU1viM/FMUw9l4oh2h1BA//gd24R/9lv/wSwPAkeKZqGFybdWUX1to6jmxuaxG0ZBMMadUnahAADSimwUyHl5aLfXNUYCQ0TG7N7J5gwCG9Qsy0W2HE8n5+Px+XQ4GU9kWTICJcs8LybTyWQ6zpYZAgRC8EBwzrTSSmsETJIYGUPOB5ubV+++HW1dL0SERP0oWgBqgLIoIiH2lvl6FGrGJlJdMo9u4Ow8K07GU57Elx
 rJIAoXpbw3W76cZ3c7zff6raXSPzuffDGe//1r22tp9Mnp+ZPh5FK/myAKWfz48wd/sf/itUGn3Ujm58Ozg/3xwUss8ka7qRtpORrHHJMyu7Q5SDvNaVFmo8nLg+NZpuL1tZjxG1e2H9z7gqTSWqfNxmKxWB8MeuuD8fERaRqNZ9liITgvEHnS4GFIUi9bHb51ydS/8bgxyTIkAMExCBLBaTZfMCaSeBBHyzxXgI1QpEkYiGA7Dtth8Ob64Eavw6CSf50eyVdlQAWyeuiwBuNqcQRetBUvHukvaBCozVIgswlZdbvaM8SIRD2F6FW4Jm3dbfsYdq38VvBuXtnZhUCgmd0s20ZMNWkCrbQmIjSPmHHARQBVm2GGeJXSRk3LUipNUpbz5XIxm56en52OzmfjUZ7lpSyLIi+yXJYlaQ2c8UCIQACilFJpigKRNpIgjgCx0+5sXrume1tLHp3m+XC57AueBmEYJS3BR1KlgqdhECBsRkE3jv7j4Vkm5ZvN9N2d9eeL/Omi6IZBrmQC+o1+eyMKfnY+3B+OSqW/2u/eOzn6n/f2954/S89PppvrOk17XDenk6/K6fTzz5+PJnme72yubbZ4JJqv3bp67/GzIgXB+Vff+WpZSk302cNHe/svl1mezbOgEe5ub1watIOrW6dn40GvNRpP2kn/+u2rf/6DH22u9+OQt9bbx6dqluV6mXOSLYgKQjocZk/uMQRgjHe6zWxJpUTGIQyJCyjydrMVrK2HYSDPz4QGHcWzKOJpQ29uta/spmGAXnnBRQiiz6pXg7USbyGXOvIj61m2ArQ/y0HTRF5tZZ790u0pV8UQVl5CSmlvbc+ynoqpoWNIiKjcE2CV1syt4eREyu4Uh9p562QedAB+vZBSSkqzua/WpZJKSgD7pFqyTwWlvCzHs+loOs2zLC9LJaUs5Gw+G03GZbZUZSlLaTcE1YQAnDMRCMa5JI0auGBpGvabKQ8FRvHm2nrjymt78fp
 PDs4wf9nncEmNh6OzrNnLm102HZdRkwAeI1zdujRTellmAOzW5tY0n//kYP9yq/F+v/v49OiTg0NdZO/221+MRgfTKVvOscgeB2EjYleXc3r+LEK8EzRPz44H7dbbb91+vLf34ejs8o3do9Pz/mBAUr37+vWP7j86OD67fuXy6zevPN9/eT6f7e7uChEwzsusiMPg6s5Gq9GMEAJkoRBa6vffeW9UFE/v3Rv02u1mmhfl1UtbGpDOhtcubWV5cXw+4owrpbjAbjNdZDnMRpyxrMg5w3wx4QCMcyoWej5UaRKSzhfLQlOcNhZhYzIcz5b5WVZ8bXv9jV6bu/gLw2oLLefgVkuIVuJPZC3EFUh5C7YCcVX4bBKe5kTz/ENy+ScX90RnG7odmEkTkVBaItgHWDGG2vvfWnNEs7rKPT1bgdmdnuyGvyanaUpFAQg0mOL2UpayLAslVVlmebbMllmeZXmelfksW0opEUiTlmVZSCmLcr5YTheL5WyuitJuCWqeO0vE3Tp50sC0RgTGUAjOuV1HH8VxpxF306TdaoIQFARs4/LPoXPvi/tXhnuXIiXzJSjZYkzIZXt5Op0uWFEGAZMaFiePG1HcScJ2q4MHI5VlrxeZOi7uPdBRlO6enRXnpyeCzUaTq4MecX4+HCPATKrRaLLe771x5/Ynn95b3+xvb27MlpmSspHGh2dD0nBtd/unn37623/4n5SWTHApi2VRPHr+8uXhyecPnvV6nWKRkVJhGm31exu9XgDUabeOTs+nWXgyWzz47LN2I+m125e3Nw9Pzg5Phy9eHAVxyJCtd9vHJ2el0o1mYz6exEkcp/FwNI0QW712WUpcZKVWQnCOoMsim+luI02bDS0VIgSQqXJS7D39Qspn08VfvXn562tdtgowS4dWQ37JEmRwURrHrUj2gdeV1epVpFf97mrWojDhKyRbSufKUip/yDNoCQACEEWgNBBo/yBRbeo2NCmlbHDVTh2zolKh
 L00iAAClZVnKoizmy/lsNpvNZ6Usy6KYLmfTxVIWWZnn88VimWdaE2gt8zKXUpZKuwfOIENun2pjFijblDworQk0IhcoOAejLBiLwqCRhJ00bjcbcRxjHMet7iLtHcyzG7S4kuhyPGZaFYTEWTfBSOswwDBOwig4G8+CfK5UTjLMF7OlpkajIUqZRmK+mE6OD3Y2NkaylU2nd65dns3nnWa6v/dyssw5MMZYp9v5gz/5s2aa/Nrr3yLS//K3/8Novmg10q2NdRHHAYNOKI6ns29+8FZAsNDqeDjKtC7yXM7noGmt32k1kkYjPj4bCeQ6mz1/efTe++932q1///v/sdtMMylzonkuMQyhKN9+87W8KOaT6cvxZDbPOt1Wr5UEoJeLZRSF/Vaz12k1kvj0fBgGQVEWy0IWUqVRAKRJSS1BE3CBXKqCyTjSfDaccv4XrcblNN5JogpQKxissaFLRroPFX3W9h686MmjLdp3gDZWBVK1nY2PbAISahO5r7xkxoSxQIgbl98WUxLZBymCW1ZhlvdWZgq5DTdriUopZZZnk+n4+Pzk7PxkPh7LItdS5UW+yPIAmdRqkRfLotSlVFIVUhERN1vbMcY5E8yn7BEQTWYTNSEQR+QcQs5LAKlUzMNmFMVR0IzjOIyAYCFlg4s0TuaM3e6kR8t+fjoJgiDP1SCNoyDoJMlkOonDcGtj/fh8FAouhBBax4jLPG9FkSiyzfUNEYTzZSnl7MEXD7bWN7a2t8psqaRaLLO0kbxz906/1/3wi3s/+NHPVFn+5vf/0nA4BMYODo46ve5av9PpdS9vbGitHj58HoRBwNjZ+agoihfHZwcvj5SU/UH3vbfu7G5tkFT7B0dT4vcePSkWs1/79V9f73V/8eM/DzhLk+Qsy8Mw+OLJ8wihncbns/k8y5IwjMJwrdduN9Px+QgB0zgSnIdROJnOkGG71Yzj+PR8hJDzKJRKkgYlVZrGhdRmG2ulyuloWC6L+
 WjO4ni6u0FpjK5g14MTfbWaNzXrxOjIk9w+oy40qV2U08cjrdfjntMBmuwGCcavB7cxGWoAl3x3XpsWBv5ak0b7hHcwViSCBs0INZFyRUNKSSK7gI4h5lyAtSeoLEtZlnmZn4+Hh4cvp2cni+lMlbKUKtOqVIoRlER5UUql0S6yJ0JQBESaA9MAGkAAlQSAIAAER47IGBM+ikEEQAEP4igIo6ARRUkS8VBoLngYxs3mUbr2uxOVsPmNiG80wvOJ7idpu5EA6aPTMyC9PuiGcdxpNvIiX+/3EPH5/kvGeWdjrSjKo7OzxTIDxP76xrVr11SeSa02L106Ozl+svditlhub25c3tl+ur8PSEqqLx4921objIcjALx+4+pX3rjz4aefvQTYXuvPpjMeBVmWX7+0+/OPP3vx4sDE7zhjr1290u92P/nii9licZbJl0+efft73+l3upgvRcB7G5vTvJiNJ/PpvNNIllqNxpNlXiDptX53o9+5vrsxns6KPF9mRZbl/W47y/OAB6BBar27tbG1Nnj0fP/g+KwZx6WS86IslGokcRSKvJAh6AhUPhljL5Bau0eZO2Ss+jpQ24XG56Vx9RFt9lcHYiQDwirxCWjMTASt/fJfN/7WAYIq1O7PQgAU1jTVWqPSQFJKX3nKAEgIEzMqykIqleV5WRaFLMsicwuFCBBLpbI8K6XU2aLIFrPJZD6eyaKUpEutpNK51mUptdJSSjCLQ2zk0hUWasUYAHGJKIkYAucsCYTg3CyOY+bZx0ARMhGIOAyiKGo1m2kc5UBBGK5tbNDgyk9l+mL/SWM5/d7lpIk6C4JIcK1pMp1NZ4s3Xruxu7P9/OXx8xcvACgSnAiajXR90Ot2Wo+e7J0NR8dnw1YjVYCdXu/uzduz+fzTz+9P8/LF8XC93WilycuDw0iIACAnGk6ng7XBeDpdFsV0PN0/OrmyvfXhvUcPn+31d7azxWyyWLx1++a1q5eeHBzNJtO007
 i0u9VI0zCKAs7zZbb3+Pna7s5wOnv++OEH77/7jW99689/+vM//ejTfrcZtVMgkpJarcbV3S3OUANs9jvtdmuZ5VKqUspGmkZRIAIRx7FWMi9VnhetZrqzPsgLeXQ+jAMRcM4Qi1Ii55yxXEoFiKBQq3KxfDlbXmulfFWlYw2XUENh/ScPJheJtD/bA43z4+qaq+oN0ohu32W3sSi5oqpVU5eAQJgrS01EpEgXZVmWpVKyLAsOKIKAcU5EWsk8yxZ5Ns8Xy/l0OhmPZvM8y4hIA+ZFCVpHDJhSAoAxLMqyVNKkJ0mTlrKUCjUhgEbQRNzlOYmII0NSqJEYA4KQs4CzNAjSKNKkF3kBpEUgEJgG4pyHYRAGIk2TTrMxnc8ni2UUha20wQc0EOLNG5dffvHJ2cFep5220uT47DyNwlazubU+2FjfLDVurg0Q8Xh43u/1sqKIw4Ahe/7iKI7CVqs9ycvj8Uwg3Ly00wjZoLt9Np09+vQLu202UiONbl/d3b919Ucf3d/d2uAAHDGJgs8/f7C+sUZRMFvmB0fH77z/Xgh6d9A+GY4eHRyR1qR0Mwg/uPva9s7u6dEhZ7hYZFEUh4y9fPLkm2+/OZ8vijIfnp6urfeOjs/W1npr3W6v01GkdjbWpVKCiwBJK73W743Gk8PTYasZa61ms2WeFXmeh1H08Nl+lhWB4K1GMo/DeZZLwEgIpZQmSqNIKlUQKQVMSV2Wv/9k/7VuczuOtOdI7Vc9OgjWV1vYx6rYWOmq8iewD4Eh/xBEq/cqe7By7cHZojXnDNyzu215gEBkZJ65XqpSyflivsiW2XIxmc1Aa855EAaIyBFBlfNsOZmMFpPpbDafLpZZlhdKm4amnLMwCDjPQZOmoiikCTMBSEVmgy6bbQVAjvYhBwTIMGBccKbRBF5ZFIoQkTMslMrKclkUseABF0EQEJIGUAg5QBPZYpnNF0upVIJMypKVy+3GIKdg0Yjzs+zxZHQ8Gse
 BaDXSjbW1Qb+viRaLRb5c7u3v9zutMs8E0Gg4C8OIAeRZnmfLVIg8CsIoWd/a1oTHJ8fZbNzttUfDYVmUYRA0m+12q722tsbZg/l8gchK0pzzLJs/Pzi+vLuZpM2rV5KtVnN9vT88OX7w4vH58Wk2XQjO37x5dWttsJhOy7IMgiAMuS7L4XB4+85NhtTrdD7++NPZfJmEwZ0bV9+8+1qRl81m4/TsfDJfXN+91EyTxWJeFFlR5M00aaSZVroEtczLl8fn82VmiQxRcB7HYTOJI8aXpczLJSKmOpJSIWeNOC2UEkmiFJ2PJqO83E1iiw/wGtmqcxPkAXBL6WoZbKOj3bN8zOoL7crsCBHJFSNTVaVELvtvyXole2V0qtulg4iEJm1L0ZUsyqIoisV8djY6Pz87z4tMgw4QQsYF50A6z/LFbJ5n+VLKZVGWUhFAwDAWQgDIUpZKlVqTUlIprZQiUkpLbbOnAMamRPNoWq21WQAPnCvGjD2Sci4QNUBWllJlpdICkTNknAWcl/bxSCiISMplKYGglSSDwaC9uf2UtX/3fCqnw/fzUTPkJ8P5bJFRGj05OBr0unEULbLFaHTOGV7d3Tg+Oc+y5dbG4Ma1q51uf75YTqbTHaL5fPH4+Yvz+WKZl5sbW8D5YG3t4d5LIhJCcM7DKFkqdWVnfa3Tevni4Hw8vHH1+htvj1/u7R/s7W9vbrz/ztvNZnN0crqzsdOJoiBgj56/DJP47ddvfv1rHwRxslzOwygw9f8sEMBwdnraa7WIi/7G2ht377QajVa3vZzOF/n40uXLN65fX2ZZyHkjbTSW8+l8JpV6tn80WWRFUc4Wy7KUSmkbrAEATVLpuVJBEAjOdZFrgABgmWcMIEzifDbXPEz7g3kUJXGUCqFddKcK3tvMYG2jjVoyp+7gk4vuAwJoDW6xsg2tG9iRKbcgALelPKBGV0f8SsDfx/YFIiolS1nKspjN54v5fD6bz8aT6Wyilkuu
 lSKSnCFjUsM8z8s8L5WWSmmtOQEiJIxzBKUVIJIGqaSSqtSkpDYBeUQMze4TyMCWh4LZlNY8m9Nk581i0dI9d8asfwqY27fJJHwJEICTJg3zPO+kjUaaNBuNKAohavTaXRyfyNMTORo9PjvMizxIYgVwY3vr1vUbSdJQWidxEgVikWXzLNte722ubwzWt4KoofT5YpGJMCo03r3z2vOXByfHR8046ne7rTRpthpnp8ODo9PpbNbvbybNzqA/uHNj96cf3X/+4uDWtfh73/zmk63HP/rFJ6DZ22+81Ww05WsyYKzsd5+9fLm+tRlE43feegOIsmXGGD84OjobjstS5vMFIF55685ar3d8cDAdnl3b2TGLsLWSN65cKvP86PhwMp4ss6JQdH5+Jjju9FsMdTsJIQnXuq2iKKeLbJ7lpVJKKkQw0z0rinaa9JtNwwpFKRd5vsxLGXDNGJcq7SffvX5pJw61UlWu6BXrE2pOknGRwBKp3/5I2yXJaBeyuWwmeZiaOJTP2ZPWzDpolUWLtQ3hDSsLrXUp5SJbzuazk/Oz8Xi0mM0Ws2mxmDOlCFFqPc9yM8OyvFBSEgBjTJitHAAJoJCq1PZBpdzEoZQtkUPGwkAEAdeMmXVt0ixf18SQcbMPqNbkHjuqiEqAkEHAGRMCAZAxpfSyKAutBeMEUBApUjFApOSg2Y2TJFeywdi4KEUcJoKdHp8kJKMoTuJod9C5feUyY/zw5Gh4fjYaTUqlwoALzqezpdLAg0QE0WBtu9vbmC/mWSkDzoXgw+FwNh1vbW5eu3R5b//Fw3uPs7KczTNCAk2Dta2rly//7OMH9548H4+n3//er6RhuLm5vrE50LJc6w8QkQueL9ud9kCww7/yy7+UBmI8HDa7nZCJsizmi2VeSgDY3lq/de2K0qqZJts77+ZF/mJ/LwyDQb+7zPL9/WehwHYj2dvfe/D8aDpfvP3atcvb671WNF8sc6nDMN7Z3l4sl5/df
 /T4+UtZlou8kEov80IWUsUUhKKUCgA0gCJQIqZGlxifa4o1dQLBkCqv24eWHFJ9vMnkkmw9vFXYJvLoiNvbl+DQqXX9OUYm6e4rSyrCdHUkzmytdvEUWqqiyCfT6dHZ6cnJ8Xg8KpZzyjKlFUPkyEops7LUAKXSpEkAcM5CzgCYVkoTFFLaPecEF4HgjEEhZV6WUgJpJMoBCCEOOTGmSlkqBYicc84YMvPgTRfGReSBMOWljLGAcc6ZMVOkVKVWjHFimGsCwEaatJqNfrcbxkkhJWTTgtJZrqVI+OBSSsvs5GWe5be2N+IoODw+/OiL+8/3D5ZlIaVuN9JGFCRx3Nw/EFEjjeOsKPO8BFCNMHjx8mCjP3jt2nUiPZ/P0yT94O23tJI/+/BTU8NbFosyX671un/7r/7Kh/cefvT5/WtXLokovnvrxkZ/kERhnMSMMc54HEVf/8Y3ci1J6vagNZJqeD48G40Xi3m3074bh5e3t25ev8IZ29/fD6KoVPKLR0/2X7ycL5ff/cZXEHUzDZMk3hh0XhwczMaTXr/7/hu3Nza34jjpKsVFoJQSjMWd5nq/vVhmjSQ6PR8fnY1Kpcu8GI4mCICCAwAyBkzwuKFEQI32RERc6ZYQdaa0oXVjHDrutBCygNKueNRp+1rRKLkaJVDOeaqVnoB90AC5bJF2BRmgNdmqKp9lNTboZDo6m4xfHh+fnp7MxqMiy3RRgJKkiTiXpAuptCJNGjVxhmZHGmOjmOdoEwBnwIUIozDkopBykeVFURrTGoGAtJQyB+DIlCaOLImiKBSylKXJjUoNSMQ4cIYIkWCMcxGIThIrgKJUpZQaKEIRBEGBwJQmZI0gEIhSK6bkMlu2kvhrnfjjvLgn4vHNN1U5vy2gODncOx91uq295/tP9w6yUs3meV4UZ+djxlgrjbVWy9lw0Gkdnk0Vss21te2trTt3bkkpJ7NlXmgmlFSy2+197b13yix7vr937crlIAyJdC
 ONte589xtf/Y9/+kPB2WI66Xa7jSju9QacQZJEnHEA2Nzcvnvz5tOnjz+7/zCMo/lkqkm1GunN6zeDIFhmy70XL37+0Sc33rj7+qXLy+n4z3/+F8+PhhGot1/f3d3cSONeGEbtduvsfKy1/qX333znrbd5EDMRj0dDAAaolC6Pjs8+ufe00UhbrdZskQvOG3G40LooSs0QlQICBIXtpg4izYVqd7e31v/K9Z231zrGdkVbiWkxQ3YLGQI0z7T0vrvFrtP3jget2kfn/dfj/LVNGWv2g0/vO+MWwFoI1j8BBPHR558OJ5PpZCzzpZaSa+IIKAJAVECylIjEOSJxwYg545e5QDtjLA4Dbda1KT0vl4tllmW5if8zzgQyY5dIQM4w4oJxnkZhIESBqPNcmVAYY1xwwZmZe4ngjThinC/zfF4UUlHAMYmjNIkyqRZSKg15URZFOZ3NE0IexXGQrCXRb261QITPs3w8kk9Ys8NCHE32T4Y8ihfLfDqbc87jMOitdTuN5LVru2+8dqvf64og6Owd/JPf+r0g3rt77fIH7761tj746Onhzz96EPHiv/lb39cgGu3OX/r2t2bTyWw2XdvYjuJYnqs8z8Io+d7X30ta7aKQO5tbrV4/SVPOuSu+hm63u7W9OxufHR8dTGazZhJvrW2sr69nWfbpJ5/de/pccNgYdGfD04NnIo6Cv/rLX//F/ecaYHNtjTOBjDUb8f0Hj+492v+V73zre9/9Xqe3rrUu8mI4HMZRMFsUw/PR/UfPJ7P52qArlUyTqNVI0zgadFpKU1GWi6KYzpeax7y7RlECLJDA3t0Y/PLOhoGYtRstKtHuZmM4VWuwTwM0Vpgr/TXVbX7rMWdZevhWVigg1OMADt0O0DaGiqu75Zvovbj/+KHKc6F1yJgABEQym4JzxqQSnAkec7M1KEFeFHlRaOeREQITQSAEcpaXZZ4XeZFneVFK++wmJAIOXJg9VpkQvBFHURAqrRE
 xiaKAs0VR5lJKAhOnIgTBWCMIQiFMrACIkKEQXAiuCMwDaqSSGjRyliRxHMfLPBueH7W77dvt7W/IJo7pxTI6ba2L6ai1PMry7Gw0KxR1uu1eM9FSfvP9N69dudzrDcIoVqoMg/DWrVvXdj/++N6j/SQKv7j3lfDNm1e2fvzhgzQQg8GAsRgAdacnZfHk8cMvfvzzr7z3/tvvfWs8OhsPT7O80MA2tna7vXXGeBTHrljWJpL7a+tJowUMtVJbmxvr65vno/Mf/vwXn3z4edzpNNstfXjSCNgkZHmccM7eu32Jc2SIUhX5ovz4sy9++tHDv/Ir3/2173+fMT6bzQBIiCBJG0cnJ7LIHu+9fHk6FEEwXxZxFANgEkdm6/RSaUCc5AWLG9jdhCBm7a7iYavbeWOtIwxyXBQdXRTI1GqC20jGZIdMYQcZ1BnvxkGTLE2iO9aF6O3WjOSNWqt7fTmfy1c5UcFKuTKgkIulAC3sIzq96kapCYhCLjjnJtOUF0VRytIWdiDnLIrDVpoKzhFQMKGlmtvqT9Jamx0cgEEYiEgYpS0EF4o0ADDERhIpHQJbUgakNAcQCIJzzplGKksZMJYEIZGWRIr02XzeCMIoCnkgBOPdVuvmlSv93mCR50maJnES8iAW+m7KZ6rxbrf127qM8v6VsLXZagh2QqQ2W+nl9X6v19tYXwvDSBEAchEEhBAE4d/7m785/2f/6uGzvdl8cXx8evvGlW+8sb6zscbsA/hYFHGZT8fDsw+/eNpb382z7M7dN9e3Lj1/+jiMk6TRFkJEURxHEeecITJAjaSkCsO41e4209Q8u0RJeXh0/Gz/QJLO5nMAGBfF6Wi2Nmhv9LutRhwHvFR6scxHs8Vksuj3Bv/dP/iv775+lwfxYjpSslwu53t7z/YPj569PMjyfD6bTybTXquVRqGUsihlo5FIqc6H42VZLotCKSUaHaW1DkQep81u52/fvf56t1FKiYyZCJHDUBWH
 h1qwqbI17XuPTct+9pkYFkMuFK+dEkdHujaw6jKb6DwudDEtF1k3jRECNDdBHLfujpmNQpQWDAVnGmhZFFlelEWplEY7A0gI3m21Ntf6RDQcT4ssF0LEQaCk1JqYyQYwxhnjCAhIWiFx4+4IzsNAMMYLVWgixpkg0giFVpyzNIqiOMzLEhHSJEKAolRZWRaktFLLPIdAsCDCMARCSbrRaCCy8WSUL5dbQqRB53Yz/Xg8XhMh27n28/sfbu49v7LWu355O0G2vr6ZpOn+i5e9bqvXG0RRLMKItC4V9QYb/7v/+u/8i9/+nU8+u392cn5wdHLzylY7YM+ePm61e81mK5PZx5988uG9p61WtyyW//b3fvzg4cO/9/f+wdUbt8+Oj0LGojjt9PpBEAjGzMIBk0ojgk6332l3ZFmkaTqfTfdfvMznyzAMpVSz0QiI5kCn56MHfF8wxjiP4mi9371x9eqv/co7b73xVhSnJydHe3ufnp4Pn+/vLcsyWy6fvjweNBMGsFgsu83GlUtbjTSWspRKDcfTk8m0yEulZBnEMNiUSavIchGlcx6sp8k7G/0AmfJbHxlXxlKd5UsXUoeaCieya9vs4g1vkvrYqFnDSVjVgRhAe1fJl0Q5LrbBArJxKLSPWAIgIKGkVIgISEiC8ZAHjHMA4EwTQVaURVlmRZGV0i38RSaY4DwUIuAMkTjnQSCyLAsED4MgK0smlQlbcIaCSJaSNEWhMMF5xtBEeUoli1JyZGkYlkIXWmuiQARRGABjQRCEnMdRpLRWtBQKAwxaaRJGYa5pXsrZdDocnXNOSlMcJePxKA7EcjBYUDJXwcEie342Srvd3btfhUc/bTfi3c3NWZa1u/1mms5m8z//6UeXdrc21tcbaSOK4m5/g4uw1Rn8t3/373x+//7PPv70+YuDn3366LNHz9vtZpqmoRDn56PjszGxoN/Lnj1/lufZaDx69+13Xnv9zVk05EHQ6nTSNDY2iR94R
 CzLQmmVNBp8eEZaTaeTg+PTQukwjERARZEHQdBK404r7baa/W4nTRs729uv37nb6fbm8/mDh/dPT09Hw7PDk5Nllk9n81lZAuPrnSZTajiadFvpG6/diKPw4Oh4/+B0sVwi40iaVIlRqnuX8iDWRKwVLqOYgvCDrUFDMLtEAlxhkSYA0OjyR+i8bOeMg9PgBGZz5QqeF7LzVsVXBSTg0KixngL1J5CvKfF1JNYJE7lUGiBENJvGRoEIAqGVLgoqyjLP86IoCynd/olIDAWKNArDKCzK8nw4jqJAK2X6RESkNAdExs3m3DlpxnjKeBSEXAhkGHBu1tAFXGRZwRDTOCq1XshSaR2HASIulzkHxIRzzuIo0gCKdFnK4WIZKcUAGJFZx7LMsqIoAVicpmmcKk3rrCCt3mrGzy5tb8fJrTTak4vR9LCb55PZdDqbbm3u7u7SycnJTz7+ott4trnWXet1WieH3e6A8YCQ3755+83X747Ho4OT46OT0/lyeXR6tpgvgzDZXONBIDhjKkrCra2drc37D+/duv1amqaL+Xw+m8RJHLrAjYkYSqnLstRKN5O00+loXYIqTFkjB9AIAedX1jtvv3b19dduDza246ShlJ7NF188vE8ERZmPR6OyLONAbK/3F8uciPphGAb85eHJ6WTa6zTfvntrc63/yecPv3i0BwzjKGrEkVJqHMezsCUBismIdbq6O9jttb9zbeeblzZMEZyBoS2udKGjClPkYkuVUehodCWf5DJGfim5NxGoxppk1qXZcGdVPuLpuYZUv/xJMGQCgQPEIoiDUDCmlVpkxSLLZF4URVm65w0CAeMsCEQSBlEUxVFIQMu8WGY5YywKhFSKI3BEYkwASKlyJRljwOxDwky6qCjKQpYI2GrwRhopqZFhgLzFmAnpGUMCOQNEwThnjCPjyErGllmeSdlqJN1GI4lCxoRUMJ0viIvu2nocJ9lizri41GhebTY30vgX87IkosHOsN
 FqqglNZ4fD8635NIziwWDQbaSCY16o+09eaHqOhBpZVpaMi521QavZGAzWu+3Ozsb6+3fvtNrtKAyBdLPZ5pw/e/rwj3/8IRAxVKAKxrlWeb6YzecN3mpxt+rVpCzKImdcrO1cSRuNxXQEpJMoVFopJYNQ9Nrt7fXe1uZalKR5nj14/HRZSMFwNl92O61eM+Wkx5MpAyqKcp4V/U4zK9XR2ehsNNla79+6emm5zP7khz8fjqcgBBdcE02XWSGCIu3LuMWJxSJcNNuNVvPvvfPa24OWts/F8LFO6747l8ijzmpdr5LNr7YcqQqLmrVrNude51FwgSpnD9i4fc30rEHfpuy9WUEAIBKzQJMzxpiUSipVFHKRLfOizEupTZEHACAILsJQxGEkhACGZu2Hto/owjAMmZQiECIQZHZFNAs47QNLqCiLrAQkEIwjw6KU56NpFIZxFJrWFaUkoHYYaKK81JyY4BwZkq1W5rIsA864EAqw4Gyt2RgMBgQggfqDdWBcI2s1mygEMZ4myfsxa0fF42UhARut9uNRHNJxYzhuPn/ywZvv337tjSAMHzx62Gs12u32+WR6dHiyKPIwCEopi+VCKl0qNV/mUlO/09pe61+/cmVne+ta2mp1O1euXH97OPzxZ49PBZyeHDTb/SCMhAi0UmUpMQg4Z1KZ5zyiCAIRBDQbHx8dnhy9jKKw0UjNyqxWI71yaTOOAkXs6fO90TSbZ3kplVQqTeL5fCYJDk/Ollm2zItBu1USHZ+cTsfTNIluX92+cmn7fDR7+vKIEBuNlCMCQ5HEQ+JjasxEpDUIpEyEZdL4jWs7d3pNpezTZRjaLb4McWinbpHQrn90j0YgW7hJFd25Pbh9QZOPENmNh92xRABGszNCMFstmZXyPk7lZ4UDNyJp8/RNJoihCEQzjUqlZ8tMKkVSyUKWZVlK6ScK54wLlibJ+qDXTOOiVOaAQDDGWByFYSgAKImjspSkNZAG4L7RpZR
 JFEVhsMxyAkriWHBuqvQ5xyQOT0fTWbZMomhRlouiEARJxBhH8+iPOAozKbks0yTBQDDGm41G0kgbjWar3Vkoef/h/fW1tfDS1SBthWEYx2kaJw0ukijuz7NhHh8tM6m7i8uvNYvpXM2/ePTF9cvX37j7zs727pNnT6aHB1EQ7GytS6nSJELGxpPZ0dm54GJr0Jgvl9li+ej5i+f7L/u99uWtjffffqvT6TaaLUbqP/zpj+JQ/Pqv/bUgbDTa3SCMyrIkojiKzJOdGGNRlJRFLou8LMvpfPH85cH5eIqcpa1m3Gwo4MDD0TRrNptbm51SytPTsyd7+804VArSJO60GuPZfDJfjMfTVjPZWetfeusOIpVlOctV1Om8s76OnJdSglRBFA7ny2f7w6kGPZ/xRhMaretrve/cvPyt7QE3qCACArMPmNFt2kXTbVCSmXXrZLFrF0Rax4WY21KZ/JcubumIcBVyaJU3+WfBgLbEbbzp2i4SCKBtCh1ACx9r1ZpAkyxKVUpTzImamA07oeA8DII4DJMoSpKYIDfPfmPIk1CEUSilBC0FxygMi6IEpavNShyNB5xJIRhiyJmJkHHE5TJfLPNlngsCXcrJZJpJRQBSSq2p0UjjCKMwCAMRl4EG4IxHSRInaZw2lmUZ5jko1Wq1ev21brvfSJsiCKIgYpyLgG9E4XY7PZ5l/8P98U/Pxq2g0dncGBfT2fDw/NGD13Z21vtrX1vbHA7PHjy8d3Z6XBTli+MzZCwU/Prl3WWWCcZuXb00mU4WWW60Uhjw0+GQiVBr2txYOx7PHz7f/+rJ8fa1O0EYg6sSV0qZrfUZwzAI0rTR7q21z06KstSAZSmTNO11O0kcD9YGu5ubWlMYiIPDw363szYYrA36Usmzs1EYCNB6d2Pt2tbabL6QSm2tD6I4ZoyHTZamjaTRjMJQCBEEotT044fPPj3NZixhACXwZdJs9/v/zVfu3O6kUmm3IsOqVJfStAkk
 F9m0jyFgTvGiY04bM9VOudtfccUUdTFOs9bXFX+a/UIQzO7AJrBktxAnv1sjeXsDzHVQJEEQBFyRVrJkSAhUSqm0AgNsRMZYGIpGo9FKkzAMlFaT6VxrzRCSOOIMw4AzhELrUirQmnMQgmnFUGvBuUkpcM4LKfPpjDPebqSc86Isy1Lm2jzNjIBIa5JSMc6QQBItizIuyjhWSimgoJMmnPFRlsWcN4MgkGU2mpwul7rIhRBr6xth3DBPQfaPQDMPA+GcDxrJN9Z7B4tsP5fXuz2gzo+1SJbTfJKvzZ53A7bd77/15lvHh3v37z+4cWmLiWCZZS+PToui6LYay+Xi9du3AHC+WDKkJEmbrc50kSuFUbP7/lvpjZ2dwfbVKE6iMOQctdbmvsiACq0BASAIo0arc/XGbRGIz7/4vMwyFoSFNlngQkkVCN5I4sFgsFjM+73BxtpgPJ3O8jLLsyjgj/YOllm+3usMep0gjNJGK24kcZKKMIqCQARBGARpEP3RvUd/9uwsU6AZQy0LQBknX9lZ323EZSn9okskF9OxhGdD9I7rwJdtvLoLQ5UNclamJrdpElZ4dlFL912tGMXfUbsKEp/uJxdh9ZcXaRqbBdFKk1IkpVZKAyBnLBQMAJCzRhy3G400jQMhgkCYwATnLBAcEaRSUkoE4AxKRRwxYLwAaboXBgHj5rEIyBA5Z0IwAiqVUloLxgRjRFSWUgNJJakEHohIiJBzEy4lIE2UxkkUJ+W5WmbLWbbsNZvpYIDIsjzvdDqETKkiYFwwIXjAAyGECIUQXCBiGolfvb7zznr36WSeMn5/NHsskVjzWr8/oTxEdV4qmc0VRrfu3M2y5fD8/PD4NA74Zq+fxmGr2cyyRac7SBvNQATIeRhGZVEIIQbrG+ubu5vrm0GchoGIwsCg02wdYB5Hhpo0IwKKoki1ujtXbk6X2dPn+71Wc9DvP3m+LxgOh+e9brcsy267vTEYKKVe7O+dj8ZxG
 GQMO+3219/pLpZLBGQi6G9t9drdMAgwEFyIkIkwDE9mix88ePCn957OxtMoSYK0PQui9W7rl6/tfP/yugCtwe164BBWcWkNg1U5nMv0XMSo2+HLsW3tdBsBUB5gRGif82qPMgVw1oNHxtzTFldaUd2USJSlYghFWZalKopSSUlaAxBnLOA8CAQXIgpDHgiNEEVBHEetNGmk0Xi6mM4WjSTkis2KQnDebKST2VKqAt22KUrpLC+5wDAMm3EiBEcAKdV0nmmto1A4W8fk1YAjk+i2M2VMCM4519rmscIg3FlbW5ZFTsS5oEBESdJqtjqdPiCejofnozMUQgRBxCJu8w22poUxdrnb2u0081Ii4neW+bSUgzQBikZSPl9Ot6L+7e3rnTjI5tOjg30M0pBjK42iQORSLubL0+EoEOLq5ctJo5PEyWQ8bLQ661uXe4N1Yedq9fKCNirerOIXQjAAQtzY3P7at/4SkCpm492NQRxHjUaj1UgbaQMZV7Icj0bLPAsCZsyDRqPBGaSNZpAkzWY7abYE4xwZFyIKglme//aHn/3kycuTyYITYRyp9c0pD9rt5n//3s3X20mplNlmwCfdq4Q4QR0ZxrmhVx7ZUYcsur0U7ctlgcgyY5Vk8rGnyktf5Ul0G+RSVUxSJQQsgy7zXEpVFEUpldndXQNxxIAzxtBYB4yxgPM0EEkUCs4JSCqSSgWCM4b5oljmRRoFSRwJIUaT2TIr7OITTZokAYtjJCLSVCqV53lRSkRMQtFI4rJURb4EAEWglUa3mC4RnHE2XWStNBGCL/OCcx7HsSY9nkwyrUnwZpJmSi2yBRCAkkoVSpZSlrIouQlMuIcy2NAbYhQGdze7m610nOVlqf7J88OfnZzvRmEqdNqg87JoBfHV196+eecttZxl80mRL7gItKKz8ZiLsNVpB2EcRclWu9dsdZqtVhIFRHY3NT9y5Ha7MMoO7eNNkDGWLeZK69u3X0uS9MWTB0mcaF
 VMpzPGxXyZtZqN8XisVTGbjJZ5SURRFDXTWCpM291mpxuHIRdhFISc80zKF5PpD+89/MGnD2WhkzSlNC2TRrvVvNZq/N07V262wlxKR2UWMpqIedq6sBWtz0z6b5zC9Y9hJecYgUelx1dlJfhtkMDFmHzk0yWrajEpD0p7fMXmKMwazbKUiohzzpnJypvHwHDGWKfZ6HU7jKHWWkq1zPLzoWy3Gv1ui3OuNCVpI4rj8WQ6mmVpyFuNJMuK5TIDIrO7PCJxZER6Os+WeaGUCoQIuQBAxrjSpVkehVpLpYxzKBii4IJxRRBFYRRFWmcAtMwWw8kUlN7udiMRnI9Gs8WCtO51uuvrWyJMSOvFcg4MwSpaBAROYJ6hDAAILA3DJAg3W/E8K7+fF5+ejz4cTr53aTvH8P5k8uenZ3ca6dfWumtR0G302o02B5RKNjt9IijypeA8jJNGo9Fstkw4QrnHibo13NVScbtyABFAm122GQBqzRgDoss371y5+fpyMjw6Otg/OjkfnXXH55PpjDMYjsen5+Oz88mdm5caEev0NgbdfqPZYkIwwNPZ/OcvDp+eT6bz7Gg0K1mkG0L3BnGv/feu/f/o+rMlyZIkSxDjRUTupott7h7uHhG5VVZWVVNXD6iHgBnQPIBo3gEC5g34BvwM/gGfgEcQDdEsBAwGQC9V1VWdlZWRkRG+26aqdxMRZsaD3KtmkdUwD4rwMFPT5V4WXg4fPvzVby63r9oqIEiS9eaDmZZJtMXqcO2jL4sCz5s5F2P5qWtdqypbS6G14F8fDwa6eoInU376zvk5f5oUnL8DzxOJ818QnIhIFjNTs0DUhjrllLNUVajrqqnrF9f73XYbY+z7fhjHw6mPMaOmytN2u62rqmubKoTHY98Pp6E377jUirLMlhISZRFEFDNEcMQIqAgFUlYDdq5sMiRGWUIFRrXDNAHYPrZVCNuu8T48nPrb02lb1QTWD0dE2m02ZspIJprGU0Z
 smdM8ReYybksMZeaPCNakHAHRgdvU9F+8vm48/4eH056pYni92VxO+RZgBP9/+3h/ezr8V9fbb7rmZ7uNxWkae4QyEs1t1zVN45gZ0NCyloKddP0qZZ/kDIgqkkUkpTgODjE4V4y18lVVN5dXN9dffX3z6cPf/Nv/+e7T+2GaP9097rb7f/WvfnE6HrZtuHlx02yvBoPvP36eRFOW//jpy8e70+NxyEiz97rZpN1l1TX/21+9/a+/voKyurTcZX0yBV3ReAB4GkxbvSKep92f5G7OXPlnCOjq7J5DRmuc/4krPLtMePZry/fXTtEZXIVnwef54x2AiSkCsFkZ4wRGw+S93282222nhnePB0fETHlSUfEes8rh2BvA5QX1g51Oxya40XHfD+OkCxRa8ClA77z33gePzI6dihASe1cXWhoiOTYBImYkyQIAjGgpZQBm6scxZgnBv7i63LTtpqpCcOTctt2GUDVVU9WNCyFUlfM+isxTX9eNI1LNBm7dbA6FUbMm6GYITLRv6v/q65f/xVdXWXXMcjvEl7X/3eMJQP9st/t3D4f/6x8+/+cvb/5Pl9dX2yrnzM5V7abbbB17Wq8yEeGzNU5FNa3cGCJWMBWdx+n2y6fD423p28XH++bVW2YHgCbK7Jqmy+S7y5ff/Hzz1z5c7ff7/X6O0zRNYvBpmv/m999/Oc1D1MeYT1GOx5O1Lex2v7y5+K+/+UoRX7Xh266SmP/UJnAdVl/xyZ/8+MlOFgqIlYJqJcHhefao6CSWeLBu9LRzzf+svD9nq+dS60986tkuz+ntT9/RU4MU/+pf/2c5JiYyBO/c5cW2bmoiRMNhTrV3SbKZtHUVHKuqc9y0TUwionUVNm1jAHePh3GY+2Ho+2GcYoEARQSJC2Llq+Cdd84BgKkiADM7z6I6z6l4NRGZVdXAOdfWFTNVzjd1aLrWISFTHcK2bes6XOwvjN2YIhEz8367vb56udtdOKJxGoi56XZV
 VbVt530IvmJHRMjEZwNVW+5ByYzLPTKzJDLFPMQ4xKRAd1P8/eH4bVv9q9cvGMQMkNAAq1D5wgssI6qIolo0L8qstYquECOISBynT58+/n//5t9++PTx19+8veyaerO/fP1t8JVzzsyyZCISyTnO89j3p+NxOM2nByb7YdB/8/EYqtrU/vD+8w+Pj7jZNCFQ8JuXL9/suv/ml6/f1F5SFFEp87PnvO7ssZ6F1zOQRM8IdIsRPWHvBbSk1XHqc2dXhnIXaPOnVCZ4eob15VZ5nMX6n9niUm+cS66fBv6C3jsCLLl9XfuuK+0ev9+0TdM8noZxGAwElFUkqjim4KpN2zZ1gwgx53kaJaUUo2pi1F1XNZWTnOc5ncZY3piqxikmSiXfR3a2Fl5N8KYwx5iLwiIAOmbHSOSd984F7755cSkKv3v3IaesKlGaUNUxZwDYb3fsmENQYjH17HeXN95XROwcEzsih+eW7/r5yxxp+aYhlk0CpYBlotr7XVPFlGeRbeWvKhdTfJimi6aufEkVClP+qVwoUr9cemNyFrOyJTE1SPP48e//X7d/92/VwH/zutle7l++9t4jURl7LLO1atb3p9PDl9PxkCRfv/rmwxDf3304jPnd5499TC+ur/7Ft9/sN81fv9i/6OoXTbP1bCoxRhF53tdZzcbwJzd+ocqXHy5nSHUZdysx2sDOwNCyz1rPiaeZrH3IpTyHM5z6HMuEp9rombU+Xf9ztPmJZf8zVwoATiQnySBgYESsBsd+fDz0Vxe77Xaz23aS0zBM0zzN0zTFpKpJtKlHQ1DReR4rR7vWU+sB2pRlnOLjcZxjVkAFJUTnnBGqaYxxRGyauqoCIqpaElsELYiyqWNum7oIh1TEjsv7mUqqAGYxZdPps936ENq27XP2RNL3OUvgV5tu50MFxETsXXDkaBlQwMUsVw+6gEErOL36FaOl9c8VEHEOTJvKl4iDVPaewRnVFhE7a/QjYvmkZjFGU
 5Wph+lR5lHjIMPjXj/u6/z6z//zb37zL0PbMTsEMtOcc3knagqI7XbXdJuLsf90OPy3v/3+//39+3q333399n9/uctgPvhf77tvL3YOLcdoBiJalpg991tncOccggFWJQZTPa9dsfWzI61UtMKvo3WebEWOFttRe25ATz96+tLnNvccpTprkDxXznkOfP4JDQ8XSVCX8qKxkFIepgkASgU/9P23377pmqv9xe5yv308HL58kfEY72NOd0cz846DZ+/Z2uqi6rZdE0J4PI2H4TaLFIUwAiBmdmwAqiYqZWJE1Zo6AELOOaU0pVQkx5tQX2833jskSillk02oTsN0HMZ+HMcs+67dbToX2NfVwzTaPF1d3wR2TG7OklL0VeXY+TX4nj/tchWe2eh6J88/X442AhoCOfTK3rkCo5R9dilLVq29R1wWlqrJOW1YdH2ZQwiZSMbD9Om3px9/9/7LXR/zv/njY/fmN/+rv/5ft123CAIAGigVGFi15ImE+DjFv789pZR7F/76z3/19dU+OPdmU99sujYEzy7OUxIlHyQmXbEtldXPrUofsPpme/bZy1b1J5+3atTDsytiyz7jc320xnAzwJ9UXWcwAM4z9ef04lkqCQBL5loSvAJ0rK+x/tbyeFsxWlM1RGeiTOSYpKzEEgFEXwUzBdE5xpRTzHmeJmJu60BE45ymOXrvura53G+JcUpJhxiSjtMEZUAewExs4cIiOzYzzJhyUtMpRgCrghvneZ7nLALMRJTATjFuEKtAiuCRN03Tj9M0z/00j0kc04urfVVV4N1ls/E+7Npt8CGKfL6/q6qmabeeHRKKqYoxMCIy0Pkq/gnA8bxNcjZeW7zpMhZgBqpo2ZjAg0s5JZGCyzt2pfnMzFYmxBEBkMi53Qv38/9y5lcP+fcc/P/mX3/9s5//AgBNFRxIzmpFLVBzFjMTk2FO/3D3aEg/TvGvr/f/y59/TQWmAphE5iwofUQKvvLBm6mrUE
 cxMwNDwpVe/FMYp9xwNABewaDF2tYEAJ9V6XbGJ88t+OfX6yz6dfZ/9icQ0k/h0jPXCcpg8bNffMqPn52f9VeWH2HZ8uEdt7VPYqfTwGaGVFV12zZZdZrGmCXGRGg5ZUKovGubOmZzjKI2xtQ1gYiGcYoTxJRiytOc5pimKABAqsEbm9ZV1bW1qU0xBc8IeOwnUa2r0BJlNTXrnHeAKee6Cl9dXY7T/HDozbQO4fbYm0jtvahNKVUDfH/3eJqnfde9ur4JVT2mNEyjmb568bqqayT2zpV1dZIVmJAUFjbikxN9ss4ycrAe65VSbss1ZPToTDVlISRHJiIAEHMyfXq24reZCMBE1MjdfPvr/auv52EAg/F0ZOeWBjRYTinOkwHY0moWVf1q01SM/+U3r4go5bTyiC1QYdMyqsVpBEQXAjnPdQ1xKpsqFmVqhCXXXM+gLYNrslok6MpFUoCVybSCSvj0uz8x8sUB/7QGen7az50nXSdCi8iSrvSZf+Yd1uPzU1+7Pn+5I66E/zHKPM9JBAG9c+wYAULw3vuH4ynH6JhjKuFYg/cGIIRiMIxzP4ybtmKmrJYVH09zP8aYxMBKnatmKcasmiQTUhGzGeY4zjMRBl8jUZaY1Vow59gAKu+C9+OcssygOsWoYNttVznu+8EBTTDdno51VZ/6McDd5X6bU/qcZ5QkOb1587Ou3SCgmGpKhAjgGdBIi6h5cRN/0pb8yd9XCHAx3GJUSyWUAICZk2QoU+OmKUvKOaYcswDiRddsqkBoKpkBqiqkmFRyShEAytCgmJUiScHMhBBaz7vGm9nYH6umY3YiYqZEzAgqeTktiAYWx9Fs4KqqNjs5PKhKoamVxPOc0i1TFiWs/0n+95wyt4AbS2IA//zrmRn9NBNdje9pz9ZPSqP1kPynn/L5v89/ef4qLqacsxiAqjCRD2G/7TabRszu7h9TakwtiyCAqExTfDgOjOS9qypfV5UBJDWiZrvZpKx
 TfDRAVSOipmlg3XNvavMcU4zBeefdlFKMkRCDrwxwmOMcUx1C19R1HXKW0zAOc3JMSKhIWa1mDoSFGHCcJu/4zeVVUzfvT4/34+Cb6mevvjLmKcXD2L+Ic/SO1ZUYXSSYy709ry1dGymFX4uwEnDWS1pu6PMEoCwsKQ4TdV1wSoimwoxmnFUt5ynGL6ee2P3m5XXwFYdwru6ziKjYIseiAkk1g2RclpOjipRIG8eTbzbsHIBJnJmdcz6ntBbLiIg5x3gY0bmq3Uz9oUDITzf7nGqDncP6E1b0nAXyVPuvlfXT7z5Z0p8Y0/l4P9n3swv4E2N99n179rpPYf1cUK1GD4tLB7eoQSJ67+u6Ct61XV1V1TzncZol5xB8FeoquHg4nYZJsjjHAJpFUspV8G1dEWJKaYoppdw1VYoxiTJRUwUgGscxQraUVCwDFpkxQHDeOe9hcfJIRGoQYxpjUpFN21ShBbBxijGnnEWdtCH4uvrx3YfrrmWik6S2aVTs/vHovPuLX/z6Zbc9jv0/ffcPX7/52f7iCpC888ysYJBVCQtCdMaqEZD/tMW8LHt8usSLk1lqj+W/ts5IFAo5oGOuAUBVVP7N3eGHMX11e/jXl1002FeVI9xXTGZieppmBCPNc85q0PJyYM7VNxJZzpojgnnndZ41ZyNi5zUnWyoMW1DXcc4p1t12Hk45zmcDWe7xWiTBuj4LdNlZ9RRe4dkne2ay+J+quP/ETS5G9vxsPy/V7Se9ejv/FgA8yTTj+QiVN4DP2vSufKtMZgbvEbE/DcfjKGpMOCC8vL7Ybbo5ZyTynsvT5Gzeg6jGlEx1inHTNuQcEW26RtQAUUQNQHImxMp7QkwxqSqTg7KHE7BspSGzxlPFfOz7snq3qiomFlm22agIM9VVcMEzQMscQuhTvOz2L/b7u/7YbS7fvv4659Q/3k45TYCB3t3df3n56u3F/jrGRJSpiEgWimpBU5jOF3ptsix1/p/6jKcj
 jcvC6me3BwGQEM3YqPb+b+6O//Oh90T/+Hh4xfr9FBX5v388/pfb6v/w9gbRpjkxU60xiCA7pIAKBEiAyCQigIjOaRZEyoAUvOUMZiqZvJeUVESl7K5SAMhxHnKuuo2Z5TQv1c5P4/jySc8+71nyd/aKf4KcP/eUPwEvn2OZy3qPhS5z9n/nFupT8vDc0FcsFs7Txk+3oPxweRtODcCUVFRojlHVUs7MXFUB0AFAVmWmCh0SNnXtKM4x5ZwIzZTBOyIGoKyGWUSECTdd44NDs3GK4zSnqGZlVcgCthXV3PJeU85ZNXhfVX7KmR233hPgNM2gIqJTTF1dt22zbdu2CneHY+29ZiEEVL3aX/zFr//ix88f//7v/v2ubX/17c+96qfbL/3D/as3X19fv5qnYSZyzjdVretlLUwSXdZFPiVIhShz3jyGz6KkWfETVnrramWcwyGAmBVWChL8zcPx//JPP/51G+7NDlG/3rV/cbX9NMt71e/j9MMw/aL1O4ZCMDRENCWwlShgZlbu9NJKIFLJ7JyAGJS9P8Leq6lGUS1NAUNEVZmHU2g6U805PYvuTyX2k6vDn9aK/0nk/HmOuJjPfyIHff7M9uxXllPx01ThbNn2TGvkWSkKz3xBeaQ6RhOFnDVJpDkBoqFtgru62LVNHTyHEGLO85xyygjqHBfQNKs5gizq1LZVqOsq5ixJh3E2haapnHMGs5mVTiCYLSo/qp658N3LB2UiAFC1yrlN2wKCaFl1h0l0ivPN5cXLywsxm2OapjmpxpxCcC9vXjSb7T/8/rcPDw9MBETv775c7y9ev3rddFvR/Ic//Pbm5evrq1d1qHNOMUUicqX1icDEBaImwmXh5wqYn1O05YIut2BZn6cqktMiwAQIVNaRAiK83jT/x29fvQ58Efz7ab5pm5ppV6X/c3s5ioGJrt2dxVkgoxkSnVsva263JMdIBGbIbJKt1CIGRFSq/udmJDlP/bFqOjU9Q/e4O
 rY/MaxzWY3PCEf/3PLKd/SntOXnR/d8ifSnRmk/bWL95JntKcdYar4/qczMEMpaV3DnF1109wi9D0Qcgt/tt4FJVO4eDsdTn1M2FdWi5oPM1DbVHGWe4/3DQbRjQkfo2naeYozJnOaUwZQIELng82qaUo6iVfCOfYwp5VzsJYvsmrYK7jTNlXNNVT2cTuM0eefrphHVEHwWNQBP1Hbt22++3u92f/junwjx1cVV17ZieuoHR3R9efPhy/v+eHzx4tU4jff3n0+Hu+BDaNqqarNkMHPOMxNTEThjWIC6Mz0NaCmI1z2OCudpXVVBRCKGBa0D1AWp/qpr/3c/r2LOIvLtrs0iamaKBNCSLroyZb+UKi2eA1cMdgUOVo9ipoisJsxcFqmBQc7RsFRsT1VR+YuKTMOpbrppONlPgzucyyBYq/pnEdz+U2X4swACf2Jnz//yp988pxPPDH1F4J8euqYNcnamuLQMbM2jABDc+vaW9xB8aJqanTv1Y9vU3LUx6zTFOCciZOcsC6oF75goxtSfBhGt6goYL3abtq6c901dDcMwxwQI3jvvuESWmJJkFVEtRSuloj8OZoTIxMMcYxZX+kDEu64jJMd0s992XWuAn77cmhoEt7m8PPVj//iYRXzw1xcXTHh7f5hVfFXffvngiH/1q9+4qvnx3R//zbvvX7/86te//ksFNDXvfZYMgN65um7qwGAoubD40QyQjAgXPbbiHmRZAl0KbVVl9lg2qpSiCklsWUdmSMSsYAWHN8kgGQqOCbIM+C5ZAcIiH702s57d0eI/EUFFDQnLqkIEKEGc6LkHPduf5jyPg6uqOI5PdnNO+NZHPuXW///tcgm16/88fy148r7rz/75V7HdgsuuvBRcCqeVFbA8/RpU7CyKvLycqyqfs2jZjVTOlSoaSJbbu8eYEhEBYt02lXdt2xDRPI2HY386DZIFCduqqqvQeN9WdajCNM0q4p2TnHdd470Dg893j9OcVM
 qlpuBdCL7yzsxSFlObcpbj0Xned93Fxa5tmxhzziCqlXfsuGnqw3E49H1m3G12P75//2q7/ebtm8/39xfdZrvpTv345uXLm5tX4zwhNGZ693D/3bu/ufv04e3bt7uLq7svn9i5bbcLdU3OO18hoimoWRkUJuaiRBy8N3zyNabPTz5kk1ITMJKBZRUEICRFxKJVbFr0xtY7sCwJBwAgWgzCAEyRGJjOFrkUYmaABEiw2C8qgErGspDKVERyTshsa+b3VHQDGEAxXx9CnOcla1yhh+dJ5BKF4ZmbLAH9mcnCszLo3Jz8yZIk/BPztBU7sNUfnst8xKcmp9kqg7OEnnOGQ7gUS6aA4Lz3gGh6pt8Alu2DoNM0pRQ3XXtztd9tuxBCU4WU8+3dwxzTiupp8ME5J2qH44l6RMKymogdq9g0pSmmw3EYp5glI1IdQlUHJEQEWYhAmFXnaa4tBBdP/TjOaY5RRMukxDzHUz9mkV99++0k+o/ffQcASNQPQ/But93WVY1I7WafVU3leHzo2s398XD7+cMvfvHLN1+9/fD+B0kpSmqr+vLi6ubm5eWLr3KmmCMiiuSStIhKW7dmy+7dP7l5VpryOT3P/XPORGRPHnC5E8GxY55jMY9VMw5gqYfKUB0TIi0M6oWvoaZGCFguEK76mrYM8Zb9vJJTGbnS8zv7aUSO8xTqmoj1zDwqkO0/QydW612LFvhJunnuaCxpYvGChGtg/6n80tmnmhmu45qr08ZlKRIuj1yxvifnamsf6+ltoWu7ulz1OeaSJprqtq1D8LcPh2makXC3bRihDc4AjqdhGoeS5xJzFcK2a4eYYsxNHerghnnuh2gAqtYP8xxTznkYx5SzAniCUnk6IAbKkMt+PAZMoimlw0mGYfSO67q+vLzo6qqua+e9iDrvFPHh/g7MJMs4Tv0wusrv2o4QQ1XFnIbTQXPabvauasLh4X/x1/+ZIf0//6f/cRrHum1eXl7P0/zv/+5
 vfv6LX445vXj5FgFOhwcFaJsu5bhpd6aljWmIBPh86LtsQxEwOCvbqMhiAbgkfLisMUFCzIpMWSEX+KZw+emcz3FRElyW/5bKHckB6vMArypLhgqmAMV9iirquvZgNZaz5yv/TvPsQtCoiya3mf6pr9OnxPeMjuvq+nBFJp8kZg2QymPO5rt0n3A1cQOAZ0UPnh21mpbjtiAUxZSfoCU9E/cKVLtEfbdtAiGa6mmIc4wiQgRq1rX1OM0jWOUYNR9PhxxHx6wpgWkd2Ll6jknEbh8OKQsRT3MkRFObYxYRAyiiWVNMKWezMpkEWS0vxFoBM0dUh2rKcY6gak1bFWYTMgfvnPPOcUppu9lcXVz88cN7NGDmvh9yzlHFJPXjeH152bXd8Xg4PD68ffMN180PP3y32+67uv3db//+ctMN3h2n+ePtF0d0eXGlCGL46dO7dDoS49ff/ll/ug9VVzTzUYt5aBlqOk9xmC5pJYADMFERlRKZVcW7gIi67ODFcqOXmT01tFX0dTW+NYiDwbLaGaDQn/mc262RDgFW9TmRnJJIXlb5LNT9n3j6c7sop0RMGvMadRcLXbHes/tciO5qRgC2KC0v7Bo4O0GDwhzFp3pr9ZAGy/rDAmuu45rL1g9bX6BYvhoiLU9sBgiGeB5GWf1qWbMEjtillMtnzCLjlBCyqJU6pq5CcCiSEbya5iRMtN20O9B+nB8lDykP46yiVRViygAQvKvqKqUEgDGbyawqRUFHwdCU1yn4kp+54EUFVStedm9Xwfvg99tt3TaO0Mz2uwvv/e3d7b5tQXTO6dgP/TgR4nbTVt61TTPM84/v37VNQ879+Mffd93m1YtXd7cf97vdfnfhvD+ejuPQf/funQb/6tXb4+HxH//hb9M8/+qXv9psvzDi5usLW+jwGgBKwnQe1zSzLLlI8SAWxExKTV6mPNTUMZuirtt3yx94qpiXe1lo+biMJwJqWT9ugIrIT9j1spUFz65p
 dWUl2cgiWsYTnu12sXKiYDFBU1Xisk3oyX8aPmWOCzVkhYMWuYViOSXaLvSi8jA8H4enHGEtXlbDXtBiWIujMztpSTRxbW0iAFKhv5XsG5ac1QxBTMGgsMUwpTzFPIwppaRmpdSum6YJbs7qxFqgqqpTzqhKYCnb4TQ9nsac1TEpEhFLlj6mXIfgPRIP45xSyqkUFQtrwS/zzMhEazqiKaYk2QCaurrc7oCgaWoAPB5PhHh9eZEl398PouqcyyLBe+94itEHf+yH3/34fkr54vLCVC8urv7p9/8Yc7q8vLn7/FFNAen7H38AhJuLSzP75S9+WW13t18+f/jxu6ZpLi4uI8C7d38M7FzVtE2LzE23sWZjZt45Ji7WqaYxpUIzWLM+IEIClGWUv2x4fAq7Kc2LldiiGgyICFRi8AK7qqy3r9w/QeSnnK/cZVNb3wPiUy/2yb5K/H1WERdvVbykFZBVZdGkJVp7Rc9tbX0xfIrQa2pamKZPFVUhNNia/xTLfNYNwqdGkT0p6BWPWV5DdS2zSule0tmnrHRx6mbm5pjAICbJWbxnx6hmIjpPc+UdVt4MuLCO2c1JYlJTGcY55cyIGUBFDDCmOIxCCETglqkIK8a3MM8BHbF3jpmYmZ3LWaA0UQC9c9McY8oxRwyBs+Q0A8DlxRYMPn6+jTGG4AGw8h4BArHv6ix5irHThrxDw8vrm4+fP8U4v337zd/+/d+Nx9Nm2xnCOIw+BMt5f3n1+uVX96fTH7/7XV1XQz98+fLl17/5y5TTOM23/+7/4xG+evP2xatvDv5xv79SCYSYVyGgkuQtbgnRyrYcRFEEQJHs2CEgM+esxWmBKgEoMZqpChEXB1xIXmaiOWnO5D0RQ7loy7xlua1lcGjZdyo55ZSW8FeOBDwVFksNshbnZ1DIRBbB/GJuqrAKKp01vNbzsox+PquknhCf5VydE8dzYbg693ORtuTThmWOdAkBq+9dnhPxfC7O/twQw
 BTPTXkwxz7klMuZ8Y6JfEoiElU1paRgnjllnWOifhBTUTVVJuyampGkH89oHKEhUk55xsjOMaIZJpEygwtgIpJyxgLBSFYzx+ydA8CSpKac5yxM0k9T63zd1I/9+HgcmjrEnKecdm0nWaY4dV3TVvVp6KcYk1ntQ1XVHz59+u4P371+8+bd+x8Dgdbhy5fb4zDGlK4uLm5uXoSmiXP89PHd5Wbb1HWeY319w4C//aff37x49bd/8+9f3Vy7UP3dP/zdX/z6L+ebPlRN1bSIxMwhVLR0vJQdn+//Ot5UdAMyswOwLJJyWu9kyUORic20JJqw4FDFnSiqAvPKGIH1flrxj5qlCFLM0xjjbGoGhkaL5xFd8wAzKN3bpRhbK2wrKxFWwGyt/JDOQfrML1JYhOXxTBxerPVcSz0VW2cY9AwB4NnMivktaTTg+owAZ3r0E3awAvzFrksaUJ4QXdt1x+OxFGwpSXmrTCiCc8zTOHnuxjmmlLabpgqOmHxw6r2oVlVV1ZXkHLOM05xSVjUkMjBRyTmrCiGWJxVVAJQ455x8Dt45dIwOF5U8R66umIgJGawJoWKXY+rnSbKodE3dAKOqzpr7GBsfvHdJNac8TdPd4/HDw+nv/ubfv7i5dsGneQ7OHY+37z5+BsQ6eCYapylnvX+4y9MUmsbMfvbNt7vdxfc/fG+it18+OUQR+b//9//dz968vf386eOHHy8vrtpu02z3l9evgg/2VJlClAwAniBpzpKYHBhkSSK5jFUgErrFJhQFiZdweXYPhEhc8FPNCZDYLbtBwNCQSxFpalaYpGnOklU0a8Yy0KxqJuUtrcjmGoqX/yx3XmHtkRWzL4iCySo3QmVH0rMVXUuye0aFDADLiponn7omjKV0On89++vK+i5+8qy2eH6Y6TM4dUEfdHXYAADgXl5fmsg4TM4bO5UsKWY1IGZE6vtJRDdtzVVIKRECCQpReWNE2LWBsD710xxz8OgcI2LhmK
 YsVoI7MSFhSkW3PIuKzoBQOzZRNW2qGj1TFlhTEwLo5ylnGebIiI/9OKVc12Gapl23ud7vg3Mh+K5rs5lmOfXDu8+fVPLP3ryZ+z7PM+93v/j5z/e73e3D48PD4fv37yNY03af3v+432xjjI7dj+/fnaZ5mqdfffv1x0+fLvbb4+Hw4mJfef/+4wfHdDw8MuLNzQvLOV1eV1VjAN4FE02aHbEQZ82mqiClrBbNZ0wRlq6IIdHCMTc833qThVpK7CRHS8kQFRwVfsBaPpeSSCTHeVqTJVBTNCqJqYE9MZUBDHTF/FeqK6LpQrrGRfprjcugVlQQbdnDsVq4rdH8yaDU7Mw8xuVUrMZoAAAKdtYggLM12+pX7VlDy85w/pKYPund2E8M3PXDmLJ47wAw5xycY6JpmhHNOQKAnHWaEzOfevXBVz6ktLAumqZqKp/FQuVfXu+KUZ7GeZzigkQQEaGI0uK6nxcQsQ5BTeMsaujUERKATTE2REwcswxTFMlKdNbDceyOpz6myETe+Vc319f7C+f83f3dzX7XEL37/Onjx89dXX9NvO82X79544Nnoq+Cv/7q9b/727853h/2m+7icq+q+/32TVV9/DiMw+C9+/Nf/uJ3f/je1Mbh+Ob1W+/dH77/Pkve7XaHu89//MM/fvXV21DVdbclYnauabeDSJJYh2ZJrtZUzJ6xKwzQ+VBcqaiZChgsfA4AAEUm0lUxgKQUX7AgB6o5l7nNlauiALrkkiq2SM2fsaaFS28ABEuHfXGLRYV2ibrP5RdtdbdwLnpsIcoAgKEaPBU9qGC0ppJrNQ7FjBfuz3OEoER5BSRAfXLpq9z4OUeys5ozrP6/fAw3p5QlL5mAihKGyiNBirnsPDJQIvRZ6qbyvmLnppjnrJ4UoSrLgOq6Ct6PU/z45Z6Jm7qOqdRGwMw5C1iZ6CU1LZ5D1KYYg/dokHJCRF9z27ZVFdqqhiW82ThKzoominjqh1I4b7t2s+k
 IoAr+5YuXMaVtW2fVx+NxmqY0zxcXF/v9TsGY+XK/u7m6urq8fv/p83g4jeOYyvQSIjPfPd4fjkdT/dW33xCiiDw8Hl6/vLm82H+5vWWmHz/effN2bJoBTaahlxQPD18Acbe9lJzqpkOirIKIiIxUGCRWEr6zUKgVEQQELHPlCFDUhnSBFYtmTvFJarks1pWcJGfNKeU0p0UxGBFVwExtAbkWrpGB4TKVtqCturi/kmguosVrP9UWSsC5vwolSVzWKZg+ddLOmWxxo2gA9KxFCSuqD2e/f3aF51T1qWlUMktcIfw1xqyyPKtnXio+QNc2NQMM3vXjpKpgSgjOOREtA8kIBGbB0aapxznOMYJBTjmBNXVu6ooQx5jnmEUtpkxmITCgiZbN2yriRAUkkyoYnrnXMSU1YyQQUJGYU1a5dpel7eoc++D6EdQ0CUA0MCOm4EIUeXg4pG13HMemP+23G2badJv9fv/u/TsQ9d41VfDMMUUD3HQbQ3q4v/3zX/383afPn7/cEyI693A4uo+fReTmcj/N0+Px0NYhuMu6rgqmO07zpm23bX08Pswxtu0WEKrgRfI8nZCpblrvAwCknAByQXKLTFcp1WHp1WBeEH7IOcOKqpSVZ7jeYBWFs+tVlRjnNOcUc86Sc85pwXdK+DRd88mSZS4pxNpKOtfpdvZYxZ2XDNcWMPQpzyy1O61Li5dAfi5h1r75Cvg/83bnBy4fakl8n2eoZ7D2J+XQ+hJrqHkq7lf/C65t6qaqnHdlGKJoVDChGaQskkU0xRQBAMmFyoNqzFlzruoqqw1z9s4R4ek0MHNThxRT2SxjANbYFLzcH2QWWMDAp/kVMyNE9AgAIoKqWVQVLrab7abz7Jhy5TgwE7NjIoBd1znnHk9HMUCDz59vD8fDVzcv6rqAtm6/2e67TkRPp36co3fsvdteXH1490PbNiGEKc4Xm+7hcDwceyQCkctNV4fq/vEQvM+iZvrh05eu3f7+
 D98/PD4iwHd//GHT1k3TgGmOU1XtiB0Ano6HZnvhfAVYHKLmZM75pde5msqSc6qmnBaitCgYIDGVsXNVMy3cZFAoSnwqZTvIlGJUEQNVlbLg+sl4ylc5AWujG88LiwtOuVZPz1ua6xDdancGRsumw2clzLNhkTPiDmiwHBJYnxEBlzO5ErWW/AFxwUefg/p4zjaXRPWMiRWaCK4VfxnIdwgYKj/GOQRP2MwTD+M8z7OqFsQPzAiJmFJKpVt1GmdTBaS6aVmAHAFQzNliQiLvHSMGRwaYxLIXcmyTCZxR3+X9qoElZRVmt2areZhmDJ68I+eAqKqqrqouLnZJpAm+Cv7Uj4YbSXI8nk6E33792rHLOX/6/JmdQ4DPd/fjODKzAjim3WbXHw/7rmubGsF2m/bvf/u7vp9ELTgHYA+Hx3mevPfH03BztVcVIgJNc5xVJSV9eDyo6X5/pap395+nmK8uL8dpHMbeOc7TWLcbKrJTgCKZ2BXRKGIHBqpZl9TTJGV2zN5pXsY1i12W6geITKRMHeWcUoqSs4pITstocOGWPsGEz4J8ufOEQOuylzNgeW7hrKZ3JhQtRlh8KoIZnFuQ5ycv40IE+ATWL65vVRhbSiQ711SI+IT72/qEBbh66tsvD17fJpgV7VIDWPcuA7i7h8cXl7vah8izCSPzdtMh4jQnYfHgiLmqAiL1w9SDOWYfvCrPcybTTVsz0ZBSXdcxikj2zJuuEZFhnKY5TXNyzjFRybtyVitI4HI+IYuAgXMumxkii8qcBjftd7vdtkveXe33b1++OI6Dc3y13zl2D4fT6dRHScfTMCb5cHurWU7zlHPe1LUB5pya4Ku6mlM69KcUZxWbU2yrepymLNpt2kbyxXYzTdP9wwEM1OzXv/r5m5fXovrjhw/9ODimnOTL3UPK8g3xZnfx+eH++x/et93xuh+OD/fj0P8qi4lImhGZiFyoyHtyys6LZsrJl12jOSOAqgCgqjIW7
 8kGqqLE3gBVcrFByTmLzNMoOYmKgiosY/gLYgMLfWNJDkzXOgNFtRTrstAAFoB2NaMFzCle8Mx2IwDQ83DSmm/YatxmaKBriVtwAFh75etbOhf7TzATLm7IbBlBO9t9yS8AziO25av0QRZ/utRPrj/1cZqK0qepwhzZcagqdkNNVU650CXnecpZidk7V6ux9yr57uHYT9FUg3dVFarAwyRZVdSISAEAiZkZ0TmHiIVejkWNSu3MNCvnuPTysmQ1lSyaUl2F0DZVFbKKY3bEaMjE2+2mbRvN8vv5h9M4xml+PB2naUazT6qO+XK/e3l9dbXfV1X17tOnH959GMZxGqcpxs1mc3Ox7do6xnT38LjtWlV7PPX/8i/+7Nc//yZJ/nJ39/s/vMtqwXERsDud+v/4u9+/fvPNTHA3zH/8dPtmnOd++N0//f7h0P+Lf/GXu92pcd774EPl62azu1yKJMkSah+qMqFR3F4pPBGBijwTqJosBZNIQftTimqaJUtKpcu5AqIFk8en0LuEzadypfjFUjTj+ohFDRPXqLo6UzrDkKuV6FK2r89cCCCLn1vLdizmpE91efG0uKpEPOWWa1qwUpFLurxWTctvnmcVzxZ+fofucOwRkZ3bbjehrjeAMaWYkndeJINBzjmmVAZ3QEUEY85d8ORDyhJPvWNOIkDUNaEKDgBENKoycxVIRZBwKTOXiqFcbQAQQkQiJSRgIlRRRqir0FbVNM0GVoUw0BgcAeAUo6l556qqIsLD6ThM09QP4zAeTv0cIwIQU11Vl7utqX748tkUvOP9pr29uz+eeud407XHcVK1u8cjItZVYObf/Opn37x9lVX7/jSOU9s20zSpalWHn3/z1ePhlFL+7rt/+vWv/+z1zXXvXR5Or66vQWZCPTw+quTBeQS4vLpxOTG7qttKnGGdDXI+lFOoJmBYpmZIlZgRwETLP5JzyjHnJJI155K72iKV94SFLzQ5WPpOVhL6c1xagR
 xcCpW1Vl6yghUtX/PjJ/LA8s3FQa6Gu/yIFnMtAfgpjUQAKmqMT4CnmgEtT7u8UTsH8lIuPWvCrVIrT19rgQ8A4C4udmYmIsd+qELYdM2ph6oK3rmc8zQnG0a30BwppVy0FOdpZudCVQXvCE2TpJTDflNVVYzJzBoXqiocjv04jEzkvXPOaZaY5QlaKDnVgtkSoTMEQyTmdrfpp/n21Hsam354PBx98HVdBWIi6odhjvHj7e2HT1/iNCPAZtN12K2QDc4x/fHDx92m27TtOM9meLHfFRIwGXji+2M/TXPbNsMw5ZxSSh8+fd51fcy5CuHNq+vjqf98+2gizru//LOfuVCJ5q3XVz//um1+8/B4C0D7XdvUVdd1VdUBkqo2XRdCVS6tr2qRhEhZMwiBmWp+DkOrgmVcrAELyJdzjDFFyVlywem0cMll1YkodcxSzdjzruMTxoxr4vjMFa1hdGXCwTlJffKAK3wL51D/pDuiazuyhLtzeSSqRnp2eM/B1vIiq09VtIXpjE/nxOA55o9wttq1HWauqsIcU1bLcwQDFQak66urOSbJqZqjqmAi7xiJYZxySmXpAswREL1z45xEJGdhpqqqADDnxIRzyqdhGueUk4TgHbsZ4notjBANsaDRSzWmQkQG1o/jdd7dXOxzzsfHw+HY18G/uNzXIUwx5SzjPKmB937bNBLCzdXFVy9fxpSmeQ5VuH849McTAoro4XgChMv97mLXfazC7eHYz1OcYxX8zdVFP8XPd/c5JQEL3n/4+GXTdduuIYK2DteXm4vt5jRM96fx5rq52F82bRecC1Xz6tXbaRpD1RFR1zXeVz7UWYS8D3UDCCKJfe1DU6aZJMUzplNOJROtJfJiZCnFeZ5yTinOWfLC7Suw8XqMAQHLHCkUKchFXn7BiZ5F1hX3gZWuX6x3VWWCBQJavO0y6AwGIEs9DmcjO2eZZ5s7HzMzW6ux5bsrkADnR5UibMlxVdfE5KliKz+
 n5+gVLpwqBXBfvtwX6K4KgRDLCJvkNKecY0pZnQ/OOWYGM8mLuLo6ZmIQ7U89EGYRJsxZ5zQwYfC+9IHGOYpoFoljTHOUp+0CS9auxXmUTTMZkAwAR+2/Tz++fHndOj8hSZ7NLGdFxNMwHIdBTV9eXKRpnqa5qcOc8vtPnzdt5Z0jtcttGxinOToCIqqrSrMchzGrff3VKwNQyXUIj4c+xlxX4WGacpYvtw93D4df/2JTRMlF01cvLt+++urz7e39f/zu8XBwRDlLyrcvbubLy6uUzYUgpmIIgM75qm7jPKZ5qrqtgcV5cL5iZjPIKSKUFbumqpqTEbNzCGgGOc4xznGeYpxFpFTutvjNglCVuLoMnVpxoIqIQLROL2HZ/XkGbQyfxM/PsXaxicV+Vj+p68rG59+FMxp1xrJWu7GnX129/9k/r8H8jGqd04EltVv94zn9XTz36pLPlo2lSLp/OLRNBUhZlByLmpo6orZpBrOqot2uVbWc0jRH0QBgCuYBJEvOqeQ+KiohnPpxTtkxbdraOY4xoYGYzTHGORaWl5oRoq3Y7FNubgaqiECqHjGLHA595R2IJJE5JT+ObgieMHjHzJu2Gab58mL/9tWN815UxxjfffycYu7aZk4JweZ5ZuK80cv9znl3Xe03m1ZE+3749OXu4dg7x8H7m+vLeYrvjp/3280U09bo4Xh88+r6+upaALJq3VSmmtIc4yhG//j77/D7P75586aqq+DcjDEAzIC82YWmi8OJ5yk0XZY5zgORK+PzKmIay9U3tSxJRdh5IATElOI4DnGeYs7FenC5m1qwQTUjpNV/FUcMJfIyGhHpMhp65rydgaTVONdBElsodcsD1yQBV+7egkQ+C/5P7lCfynA4t9SfQU0FSyhHyM5euCQNYmAL4HD+3urjn3lcRCwCz+XzuJzzMFNTV4QUY1K1ENz+4qJtG2Ym1N1mk7I8HI7kfFXXh8MRwBLxbHNK2cxMFh6dqG27
 pm6qlPIwziK5TNHTAh0rITgmM1DVYqnlzZZ1RmoGZRmI6q7pri8vMthpmmIWFT0+HEikaZqL7Wbb1Z65q6td11R1NU2zmKWY2rpJHO8eHvt+KNhKVVf7lKtQpZSHND8cjllknOZQ+b/+q18Tu3fv3t3dH+aUuqaOKZ/6sR+GaU43lxfHfvhyf59S8j7c3h8u99s5SYrx4XD6/t2H+8fji5c3F5vN9eVlcEFyStPgQx1cmKcBCR25lEQtRxNids6X1vpCvlvmmRSZUoyScwkvjssoMzAZAIGKSjYreBMCMpArK1TtfEtLHFomTc7R3tbYuth7yQfWNvqS9i1i5uXYlEJ29R3FVlbyGwKYLK5x0T5eQACzsoDG1mhNhnbuRuFakRlgoRCUiqPwVdYhu5KcnNNoOD8MwJlZnEYH5p0rWgSqELxXEQCLMQ/j1DT1brOZY5rnOE2zmbJXAxQdC26SJZ81VVSUEERV1VKSmFIxU1szKaRS1Zdc6Xy9lgyKCrFScn/q1UxyNsmaZeyzY2q69hTj4dOn2/uqCt4xA+ChH+7uHx4eDzkmBGDvysXt2padG6f5+z/+yLxsjClMqz//5c9evbh+9+HjtutE9OXNRds0x37YbjY5p4sdiOT3Hz7fH45NXTFxCH6Omcl9fHy4eziM0/xPv/99jLHfbhxgW7fBeVVNcSq8u3kYoKpNNUsyU5mn4AM7j4CGRkhnEDFN4zT2c5wJlIJDMC4SlQAm2cSySrQsksyQ2NkSuBdsbknxCtKJiMXLrpno+bqW0snW4I2rB3sGkp9L7OdpACwgFyCu436rokLJ0Jb/eSZsuy5OXvnNuAjrruG7hHRd84zFwJfS6uyKz87LYUGFUqpyruttacE/HE+OKcZ0PJ2mOXYxHvspptzWoW3rUHlHNIzzPcI0zRqFkGLK0zhpzv2i407OOchyrtPVQAxMDVTWDKN8FDhf2uINRDVOcYDBED1yW9UnHUz0dOoR4fLio
 tk0fYxzSpumxhHuHx4fjsftpiuicG3bjvNUlj/d3d3NKRFi0zRmqmKIuOmaH9+9/3J/3zVN1zZNU7+8vnCOQ1Vd7PZm+k/ffTdHff/5lgjfvLz2jkNgJP/4eArOgxoZSMy3n28t57ap9/ttXQWwMig0W8k0cyRya3RGydHMnHOEfHZRojnGKaVZJRWGPhESIROqKJhoTpKipBlVaQGmzFZl01L02KIYa7zQ5sH0+TDqkvHbOadcSylEOI/4LsTNNXdcy6IVxcSlsb7a8FLvrdXQalII5yUjT8nkGX1aC6sSNLW8h7WHUI4BPU9YAQDRVVUoW89MlUodENPj4fjier9p64fjaZrjbtshoedl5crFdlN5R3g8nbyKmBo7z46z5Kyy6dqmrQlJ1UQtx5whQ1EgkDPndb1F5Qroyj1bpNUWzAERxKSufExuzDPEeDyail7InoJ37ACwH+fb+/vdZvOLb79x3h37/vHhMAzTaRglppwTMHn2cV42iLJ3SPTjx89fv35FiA+P8a//6s8v99txnrfdtm0aM3375qthmP/ww485yeHYbzdtXfm23d09HKYYv3r1IqbkGLfbdretm9qlFEXEOVs6OqXmMwU2QCgk7JJpmZloJnJLLqdqKqYCmkspC8CgIKo5JU3TPPXTNKlmACAkYkNEIzV0WhpAuojIF20INSurpwDPFQwUJfZSbi9bFQCWSgB04evgYnDnGuicGCxhzbQoQeMZJTzDorbknraIvwMtNXjJMBcm6Yqwltuu5ZlLfxSXE2Nr6xPOR8AhQJG9iCk9PByy6ByjY3e931Dt6+CHcZpj3jTN3f3jaRhNtR/GTdfKwrsBZhLRnLKIFNLUpmvbtp3mKKLa1OWNapaMshSaz5gLiFhS0sL/MUSYiySONW0NhQyKVD5MznI89dM0b7q2auq7xwfLUp7l3fuPx74fpinO8zRFyQkVyBECDFNvIuxcCKG28Jjz61cvguMvdw83lxfbrj
 v2PRiEEMTADC4url9c0bsPH/7w/buPn28/fbl7/dVN225fXl89+Idv3ry5utyd+tN+t7u62O/3F03TMTM8u7trI7d4NTAs1Fheot7iXiTlmNI8T4PmCCCBUAkFUFVTitM8zTGKKpqAWRlkgqzeBSVRK7RxBMDiYpTIMS1usHBSy0/hp2o2WCLVApESLCZki7jf01dxgGfwH5ah5CUZhRV4R4RC7afVny5eeWXzrYo9oAhPNCszxDO7dC29znDX0ogwV1c+JhymGLOcTj0RBe+cd5/vDs6dALAKYbE9xMq7LGoG/TAhQlNXkiXGJBJTSipCRNM0/uEPP4QqOO+rKtQhVI4fAFJKwXyClMSYyQdvBiK5iLMvufQyLqiSNKt657pNpyLOcVkYXCj6GezUD8McvePgXRXCMExfhrv+1BfB0VIpm5klAwAi8t674Nn7Kcab66vLy4vH43HbNt5zP/QAVldVERA9nfrNZkfO/ebPfqkqp+Px05f7FGOM88vr/X7bbLfbFzfXx9Oprpu6akJVBx8KVAfPv5aKBRdqE5GBETECIaKq5JzSNE7DaRgOKBlNlAwMRHLKaY5pSsuwHq2VSMlCk0cgBWQDVDVEcs45ZkQUNUYzKFL1CGcwfJlORrVi17hCAmcbKpXKgrgv9m2ABFqmSZ+sfCnRC7BQyCZlBmABws7wPuAa4RGL7hWU8aiSDyMY8FJAL2+3GPg5IzAAZ0jEzjlNKY0pEbNqBYBjnszMOb/fdo758XAcx8kH75xjxL4fYozOORd8VnXqYozFf8SYRWWc5q5r99sNIk0pOuamaUacsxqD1HW13+1EpO/7mBIIruSpBYBDNTM5HY5FWGae5mUAHEALImWJRXLE6H2Y0xzjNE3lUBZ421YADgm9dyEE7zlLQqS2ae4eHtGgl3Ga59cvrq+vLh+Px4uL6+1mezjczfOYstvvL3/182/jNLbN+8dTXyCI3c2NITd123Z7ACPi0ujLOS8
 3dVn+fTbLsscBl5mPp7TNUpofD7e3t58kjpAjqjgw1JzFprJ52kAAkZgJiR2wA3Jski0hKnPhvSPzqrhZcgRaQCiiMv1WOA+LEdDqHBcLWRuU5dqKLFjAcsitNIBWsGhFjojwJ2b0VKcvhlf+FITnjCzo2u4ssZtwmUQ/Aw8r/Ln2HswAwJUqpq4bQxz7QWOa5rnsCTbCbdfmXMUYqyqI6BzjNEUmSCnnnLNoXdelSxRzjlmKClfpMLHjKaby5bgMLmdiUtOU8zhOTVtvNpspzillSWmeY2k5AJGYMVoWmGNk50SfWhBqCgKFCI2ISXJCLnvEbf0qRo4AzAxExFRXfo5pnueqqt5/+Nh1m66p1ezt6xfMZAbOeSJMKW67/eF0PPWfr69e7PeXqe1Cu/ub//C3265pmrqqKnIVs/fOWyljVwlZK3xXeMJ3ibjIJgMRLNW3AnAJ+ExODWOKp4fHNE8oCSRbFgETRSAUIGXnmBw7YGVnzgGgslNmR2zeO2ZHVNZSo5qgQtn6DQaiBgDFDkqeVGyjEJ0WV1lo/kvpsoA+a7fp3M2Bc0JWCBVl+zzC8wnntY+12nGxxMX4DIpYry47P+BcTj0dl3VZyvmZiz92m+1mGMbgXV373abtx2nsRzNj79i5rmsN8dOX+xBcCKHt2hxjylKchKj1w1BOmveubZuUUk4JkIo+o5l1bTPOpKJoggjBOUJUUSYMzicUyskxo0HOIks9alBwXVRMqUiAr5XHCqcgEhMTmYGUr6KRuyLNpcvimJ3juqpSSnNKRpSLDDT7fhic4/1u808/fLi+GF6/vL5/uD/54367HafheDzst1vv64fD8XJ/9c3bt1Vdh1BhMS0kMyNiRDaRdTYI1y8oeQUgItHTfYClE5NklhTH8SQiTdXmNinQPPQx55yWvrshKRqiOmbvzcgoWxUMiUmNSJ0zREMEpmWrYVnnp1IolUtqsTCTDEUXUJwJrbQ0AQqdTBdjWqr2
 pUBZYKIzBL/8t0D/+GRb68+xwE1rG+n8W2t/vpwJwnPjdkVs16przd/h7IZNzeWcg3dNXTvPAKTwOI0TiAXvEanw5It8vZltKs9NdewHxOrUj9M05Tkuu7+c224753x/PB1Px5QTR8eurFxlJhYil5JlSWZECITTVFSbivSIOeaVPLMc3DKbz6p1FRAwznNSQUDHjr3zzi3D+yKmiiXpXvsiZZ0XM4vY8dQjknO8CF0RH09HU+u23cfPXx6Pp6EfwaypQxZ5+/p1XTUA+PhwD8j9NKvq26/eYMkg2RGxAZZhYCyrjvGcQS0V3QKcreopiIiFSwyQ8jScDsfj4zCcSvkfQiVSpG/IKKIsGm4lAOecUxbDDMijS84xO0fEwYeUXF1VWgVRVRe8d6svt3NvxgAJQeFp9ZGuE54GZdoTABbyUSlnVxTgGTj67IytBrloTZbwX6buyaiwLM4oTXmiUi+qAZ/j+YKDL667nOpl09cZDTVTMDcM4+V+d3mxH6bp0I8AEOp6HkYRIbY8ZzFlwipUwPzw8Ng2NSzPguycD5aHcZqmk0o1VdeXl+2my6ZxmmKKNFMIgYl16d4zB984NjNQjbnMziuY5YV3boBIRCKLoiQgECIQIyAHj+rK+u4SrpjZQJkpxlTaWk/HVoXJkXOBnUimBc5TNJCUsgp7j4D39w+FVgIAbVNnydOU/uLPfl5X9T/8/vsXVxddt5Wc5mm4uHyhgGZISLZQLSyLQEkkzplTwcdwEbl7MtBS7ZqJ5HEaHw8P0zRIWnOglGLMomjkAIyxiJRp0dMp4AgSZNGYiDiFEJiJjRVMDQiZCEWtnINyBdbeh5mZ5nOfU3Hl4S4AT0HG1ppmgTmXuaU1Pq9/oXNnaHGm628uHxlMQdath7ikp0tqscKquHbrF+9Lq/Nex51/0iZ1bV23TV2Gg2vvQURCdoRV8N4HVRnH6XTqcxqqugohlAaQijBzW9fZ+5Qz5wyIaY6fPn6qq
 soHj8SQc8556HvnPRGBllm2SsxATSTHlJJkWxp5Aqti2RIW194SMzsiAUMjzwucR0TddrPpmqEfHh+PAKl8BFyPLQAyc9e1Taicd4QQY4opDf2QUyZEx06ylCGklNIwTqqKCN/98V0InhC+3B1ykr/6ixdd10hOMcW2u4BlldGCv5TUU02Zym5EMCvq+7S60dU4kYrcvHceF5AX1ExMk+gU8xSzKx0OAgUC5lLiOEQHAMSGTMSAWOAR74N3ntGZgUpWAiQnsDQwF3LQk5kiFEp4wSuhvJt1+LiEeIIiTrhy6Qvw9CdPhavBLvG4YOjwfBhkSQ7OOaYVrAuBVO1MeVk9CRhaliUxXds1CwiFAO50Og3jVFWhpGuA4IMn5pxzSqMPjp1j5yRLjFFVYyQEqOtAaCKSVcvAmkiWLPM8j+M0znPZRpTmmHN2zjFSUSZiJkRi76qqUgBOCcg0ZStjuGC2rLJ6ymxKsKtCUK8i6gCRuW6qtmmc91VTXyD1Y5jXKr74g65rry8v2bGqbruOGdVgHGfv3IFOOUuZKwcDEzsdTz74/W47x/jp9g4Ad9tOlU7DPE39zc01AJqh5BxCXZCgM+RpsHTY0bC4f1ySv2XestxGJFpq0zINl6OUoWRywQOYOWZYKjwzMzBrVKSUzwAA66aaEh/VRCQCZBEimh3XIYQQmBlpWaRbxgFWi1m9KqyHo8hKWonVpcO8QHLPQvozJVsAU80ATFTCMaxaOgvsClC6LeXt4dIseHqGwlx7KreeJHDO2Wtpi0KR/St+1h1PAxOlVNd1oIR1XbdN/Xg4ZREVTSKIGOqKgKZpHMdpnmd23OWmaZrC09l2Tds2p364u39ERPbujOIyMTKpyBCnsv7HETtmcux2O+9YqjpzFoNxGJd+nFkWwTUyAqD3vttunPcxxpyFwAxMVU/9oCervKur0HQtgKFaKZUMoGqqTdc1oRrGQVRR6auXNynLp+C6bXc69sMwlfbPlN
 Lh8egrH95+dRrldOqnYazrutl0VxcX7z9+2e8urm++YhcQIOdYIratze51P5CtIX5B6UusffKhBoAgItM8HE/HYZhMs2d2zokCsqKCSjZAIvKOiYhpDdCqWXROOueckqSkZaifHXt23vlQOQAEJKfmPAIgE6rqUtEXGUe0M8t9yYJWMoQuNCV8yl9XJmTxEPhk5nYmSZfIvSp2L2nl2fqWMH6ePF4h0jUleMKVCjRTsl4pqmx6dtfgYoyFPDuNIzG/uHZNU1dVDYDGOsfEjJcX+7ZppmmepvH+/nEch5zySQfnuKlrBX/sx9MwZlVyblNVOeXCCmXvETHlPI5jnOeUEiLmlDWlTdddXF5cEt3dP07TjFjU/J8CQ/nQzOyrOlRB1nwsiw7zlFJm4q5ti9txhJuuDcxmFkplxuyD2293zNdgNs1z1zRf7u69921dN3UlYo+Ph3mamCnGrFkOh1OOiRD7YUw5i9nrVy+ato0pppR8aFyockoqWVWLCvOStC2hcyVglJoDCYmw/FmRxSxpmqZpGud50pwyEzMjgGdCY2VCAMfkHTMioRGqqmQpUgk2RxmneZpTUkMi77hyvmsbH7rF+NY3Y7bARes0CBVDNANaS8lzwrrkhmsyivTUOoJnDm+trZfwVip9Lfsd1rGTUhKVfSlECGtdf75Q53xjQbbsHNwXX6uLR14AcWeqQCg5IyEjz3HGExX1wGma1KzyjWQZ57mq3HZ7fXmxu729/3x7P/QDmE1hqurKee+8++rFjfcOAI7H0+HYRxEHgER1CLvNJub0+fZuGkdVaZrGAMZ+IMdF/kpXbwRqsAKZzByqwAj9sTfQlCTFGQAIIDCHqvKeYVHzQDLwTN57JCoMnKaqEGyepqap37x6iQgpp5xzjOnFzY3m7BnnubnYb6c5fv5y//n2vqqqEKpxnlXUIXaBX7140XU7YlfKSheCZsqQACBLWsH5pewsi2myZAfOsXtyKsVPIBCiqmU
 RFUmiKWVCgDIpgcBEROgIVvKxJs1lNE2WYZCUYoxziiKGaHW9aZqubdqm9j5474nZMRNS8VNJFBEcczG9BWrEMsX5NDEHS3WyVtxPE55WcgpbqylY4UnEFapcLLeUtlIaEfQshVgvzrn0WZ5HVns/5xKm5b6X5sGyZcVpFkV0zvu6IqTDcbh/OJYyuSBH3jkwSbMeHw/eu8uL3cuXN8j88cPnaZrmGHNKoQreezSrm1pEnONQBTUlYjAz1aqubjZXSPTly21/OhV4KIswYt00oQrjOK54w/IxHFHwftO2zrlhGKdhLBVVoTUG7x1RkUuAnCvvqxBCCNM0IlLXdV3XMNE0z/Mc1bRrm65tX1xfVp6dr6qq+nJ35xjJuYvd7uHxUNfh9vYRifb7bZmrRDQi9M6FqsaV/6U5mRkzgREbq503Fa29uaVaV5HM4IAYgGDBtJGQQwhV8EsdgyQqKcWUpTgSzxgdES40ZQQtLKEi4EpgnjARAFBVN9eX+6vLy65r6yoQO8eMS0wH0Gc7txft+iVNVls2/gkAAwAUKjSoWqmTnhzbiggsQX/NX5dLYWZarH11jUimKmvgXrzgs448IJTXWoGwdRh55Vkv4JSBrWssXCpnkWIScd4557qma5tWVUSyrZV1TGmcZxU5nfrdbnt9dQFmX77cl0gXU5pjmubYzrEKgZkckyn64FW0H8cxxg+fPhFQ1zRFZX2aJnbOORdUDWzZBFj6WgAIRs6Ftokip2FIMWph7gEQEhAZQFKriGrvfPBqdup7s4UC1o+jqrZNNc+RiB4PJwO4vkhVVbdtJ6qq0g+9qe27jhGPp1PlfV1X7Fzwvu8Hx1RVNTuva1Ve9GIRUFUk5wWct+X2LJo0ZT0Nu2X5UWksFXkDREQUyVkkppyyppyhCCESlQEFM9UEEyzlFyM4QiYEYkBgIiarA3vfuarebrcXu31T184xlm7S2iQoyh9rzAcRMUJCgpVTVMBzpsXmVBUB
 DQt+hmuGWlJDKDXeikYt7nRtiJRyTp87VFxnoJ8wPztDK2CmgMtIdHlWXeP+uV+1Ml4BAEprfUlmEdAxe+9TSkmSiuYspfh3TMGxmn78fHc69TfXV6pWVpoSczkxkvPpeBqY6rpGIiaqEZHRAIa+Tyk5dtc3N7s9jcMIZkRUhVCc5ZKp2DLwh0xMbGJjnHLZnarGRIwEjN57JvSePfESIAlNCxk5jcNoiNeXFzFGJAzB13Wlau8/fqmCB4C2qXbbXV3V9/ePP7z/xzcvr8tNuNhtx2l++9WrLHkcxhB8FSrnvAEsx7WEJDWRnKVMXUqpLXipaNAWZSxaoPD1sq9RTCTnQn5bq3oDQueY0HK24hRK+1TAMqJzzKxIXHzUZts1TVe3XdNsmNgARK103WHFtmzx6LYAQFjE5stWkKWTtLxyGRp+FnlXZZ3nRmaotLI/zUpEWEqcxSjPqNDaKQV4srznF2F9agPAkr+u8PCaBz+lHWYG4JquNTMmBgQkjCmZQRUCGJhomTrJEhM+QQ6H0zBMMyM758ocZhWCimYRx7zdds7xMEXnnJg4dteXF3NM4zSaQRUYknG3IQTHTEybrgXV27uHRcC/pDvEQCR54d2pmiOq6trMFDQE31Y1EBWjAUQ2E8RDjNM8qSgifkiprarNtvPeV1U1DFPOWQ3meXbOp5y7pn5xsZWcfv+HH0vNXIUQk1xc7HebTYqxbaqqqgpSAQCqYklK/00k55xNZalGS/FROptES9JmRcjWFJQIkDCnPMd5mKYpJjNzXGyFAMAxxag5WUqiIqBSArT3DsuoFmIIoa6but003ab0uuaUSyuOEN2yFIWR2GwVU16rt5I5rdUPIQCcx0oLE6kc86LcXswUccF6V31aW0eS1gpqrcbObSsoYuWGAMtQgC2WVox1IR8uQrmrehmeSSa2NEp1ZUUBOO89GjjnRBUQnOOyAfviYi8q93f30zQXvFBVJWfvXQh+nqQAwq6cf
 TAlc94XHlOZD4wpxZybGqsQ1Gn2vq1rAGDm64vtHNPd4+M0joExeD53BQ2AiOq2qdqNSpJBHTtw3rFDQgKoHTNREiEzds57n2IcYoICkZgCuyx5enione+H7fHUI5GpXWy3vGmrur7Y7w6nU07p4eGh6DOOczycTkm0qcLxeCTiU9+XzNI7t6ZlRXRDRUUkiyQwO7c0ccXt1wTLcBm2XMBtUxXN8zyfTofj6TBNk2fk4IJjIlQmBJ3mFMVkTmaKCHUVgiNfusXOhaqu267bdCE0YDjOsZ/GApYRQ+V929SVeS5COrbgU2fnhYigi8hCSaiLuxUtK/LQAAqlgdcGREHH1kFkNLBz1X8GgYvpPqt2llC8+kpb6qTzFuc1A3jiJp6FzVY28xktRURXVVXOuWkrIhrG2cxc8Ju23W46M/XO9f1wOBxTSrTsldK+H4vYFTOHEABBshhYVdXsuOxUMFNGaOo6eBdTTDHWwVd11TYVAHZN049TTDFOU8r58nKPf/zRkhaYm733oUIiSUrO1XUNZR5a1FSkrNIgMoQ4p2kcUxYog9VgubSuRQzAN/T4eHg8nqrgzSzn/On+7qub67sqTCldbDdTnNnxy1c3CDCneH9/qCp/fbkfpth1m03XFEJCqc2hcJeWFa6GAEVMmcjR+U6t0anYZXERJWcpGfYwDqfTqe+HYRiDI4SKqWJiJqq9l7Y2kQkkpVwElACM0AJD8BQCegZSmcdhmPNpGMseFeewCgwMmjkvK0aX3cl/stoLDASFkBQAlzZBUc43QaRnB2ydVILzx1JVe95jx2Wnk1mpulYlvYXdaQBl6mqZiDz3mFafuuSwazBfsodSjZ1fBQzcfr83zd65lAXnhACXu+1u2wGBGXnvmqYuxA4wkJTGcSqfIaakoqYaqsCOmcgxVcGbOTFIMZqoGey23a5rDsNoprvtlomHYTicTjnn3abbb1tVu9xvf/bN19//4YcSetA5ruumaUNd5Z
 QgSxZhIMeWhQpqLSlLzlb2GqkhAJbtOUspBT6EJMoEdVWpKjGdTifn3f3D4fb+4c2rF3cim7bedt0U0+Vuy4QpppRyU1ebTVdVvGmb7bYtTQFmlsJaAss5F+9C5MyMaFkqe3Yr63V/lk2BmRbZkDnnWPRw1HSOyfFCzchqCOYYPXPZzEmgKiqE6ha37fM85jRFvevjnLIVAqggKpiKqqhktXPBsyy3KD6Scdl/XogjxfEr0ULFQxJEJHRET+gYPstDYdEQXz0gPCeL2ApqrqAbFDirLIFeGSnLA21F9Q20KE8U1kXpmp4helMDAueZfN2WOayri13BgcfpbrvtLi/2L64vd5sup/THD59jTKfj4fGRzCyLDv2gqnUd6roqSbMP3gCHfkgpzTGpaE65rcNXLy598HeHUxVC27bf//h+ntNm23775qZr24fD8eF4MoC6rlJMyNy0zdXl5W6z7cfx8Pg4xZTjrCKjiuZsa5timV5acnIzBTBBRCRkYgQUVXIMBC+vb64v9uW6DOPonDOAcY5EtNu4oR+2XUvsri53wzAZyPXl1W7bimRmMtUsUc0hoKioSokeJcU692bWlsgaAZc7eq6SDABSzvM898OUUkYERkCElHIpTHJKKSXJsuKV5lBBxRQkGZkKaB/jGGXKEAUNzDnHTLVDLmurwUTVs8Kic3jev2oAoFj2WayZcoEtzRCBkUqWSEAGkBHLB3ySOT1bGBaCvRGCyaq2gFhWJC4Z5EqJtDUrhVU3qlyeJROFFYGCFTpdW6ELpI8Iqm4YBvY+54ymTVNf7rfTFE/DAADOuU1bi+r98dTUgRGOJ+y6lhCO/dDUAQm994u5gOWcAVBUc1aE5Soc+/Hv/uN3dRUuL3ei+eFw3Gy64/ET4+by8qLttnMW5/zt7f2CZRC2TXNzdRXqKkp2jst1AijZn5oIwJPw3zIys+xEI1vYT+g8Bx9KzgAAPvhffPPm8dSf+rEKXlVD8Mdj/+7
 D59Mw1nVV1/XjYSCCUz+1zenm6pLYqaGa5pwQpOjDSE5lWXwxvTJBIWKAsJZTCE9Ts08VLgOC6uF0fHh8HMbRMXpytmx8N1NNKaeSAjJ7RgITQ2L0rmxN0WGYslq2wv6h4F1VcVfXPnjvGNk5dmKoBkxlXLgEFF1bkWVr3mo6q6vLGYAXGMzMSnOfzr0HW7n3sJZ9z5qU8LSgttjb8mC1Yr4L/LomrE8Zgy1lOwAA4wLJLWnnuco3AwB3d/+w6TZ1UyFSFu28u9l27dCkGA/HU6mKzFRFiHC36fp+VFMfqpJalFXe7JiZ4xxjEgRyjpwLRNS0bRHo+nz7cDgNL24uQqhev3q57br7x+P7D59fvLBt12mjl/vtl89f2LHfbC6/enV1c01IKaY4TvMc8zyLyOIP1kjxFDsQC4Nsyf+ICoQxp8jOVRjGOJ+G8f2nL01bf/Xyep5jGQkZpul0OmXRlHIV9NAPY9+/+eplynLs+8v9pffBlk2F8RzwVGXN64u/hjLHLuukh63wYBmjLc5DVIjIsYNSPs6as6uzD56dK9W+sYHzbCulgwAITAxMdU5SVoUTZCZ0zgcOm9pt2hCCRyIpI8tAWVF0aQqnnM/+HlciSzFTVaJis2vtwsxY8Jx1Azetqq5gcEZZl9q+2Fk5rCsmtQaQJX1d2vFrXnvOD0qHae3+rzmnlbwWzhSAAgE4RIwpMaP3PKf8+HiQ3KkaOvd4ON3fPVzstyEEBdtvWu8dO5YsOethmr1j7z0i+uB3m/Z4Gvq+B6BpNmZXVSGlHOdY7KYfhvoYXr9qHdmbV1fb7ebdpy9ZPn395vW267569fLz3cPx1Fd13Xjf1cGHahz6j2hzjDEWreEzjF/QNWLPBTMmIlMpooMqYiKi5h07A0Qkg9NpyFmuL7aaJKtuu3aSVFfe8/7T7f3j8RRCeHl9daxCSokRqhDmOHehcezmec650OaLyt+512K69GOYtJACFHGhjBGWaaSl
 ri1Oixm99545qUrOmZAZUYAJ0TSJgCktjGcgMFQVkZg1GywUIjMkrIkdagBFzRINwLJhQmbnEZ0ZpKzTnOZ5nqa5GMfCGyd2jokZkFxZH+icqhYBVyIEoLN3LB+yMAxU4Tx2b0tBsxhSacevQFKh7q92CWVIfw36pa5XKLgBLV53wT9X77OKmxgAmquaJqV0OA2qaip1FSTrpmu2bcfE4zhOc5xTUrUUZ88MAHNMYsbMxBScA4QQfEqZHG93u5TSME0ppeAdmA7DmESmcWy7ZrvtRHWc5inGTdt2Tf3p9q5pmov9/vrq+uu3r999+ARMofJ1XV1fXV9fXbndxvi3h8+c+n6eJhEpnsuHyjHbKjKMgFkFwUA0RpBShaQ8i6YUTye32W3die/u7pu63m/boW8Ld+bVzeV+t5li3nTdHKdf//LbYZqiSDkEw9B37YaZNceYo4oyoxHTmkURAREvgt/ETA4KOYCWYdqCg5aStyT6wVMVWDSDFehOsJBcEZkwJpslg6krZsrMzlceagQzUEBH4GmZaYpqGrNaVjMgBjJnRIwiNk5pGMd+mPphlJQBrMwXOOaq8iEEdhzKDKRzxKyqwTtjXpC1QouHwpVHXuc0114RnuuedWPzMi1ccP6lolqxzhKun0DUdVxOn1WSxSJlzecXFMzAtU0DdfV4PPVDjwop5XGKt85tu3p/sa+rKqdU9qjePx4QoK6qzaa9vtzfml3utm3XTNNUenebribCH999FBHnXFZrm4adi/MsOeck4zTvdtvjaXCOHKIjAKQ/vnv/9ds3N69fzznNcxzH2aY4joOjFzeXV5f7i6tu8x9++49fPnw8HQ5xHDVnAHDeNXXdVMHMspqKOCYDO51OAgCZVLKplMS/ClUc43GeEZEc3z9WF9tNEmnbpvIupkzMnum2H+I8v371YpynOaamcobSDz2W/cEGZpbFQJSf5jYL/5KXze8/BfQWwOlZyVLgd8ccfCBUhwAAO
 WsZYxMAYvKrriwxe0fOccWEoFDm1s1ETZGQyJCSARM7Ju88MhsyIuWc5jiP03Tqh1M/pHkq8bocIu+9d857R0zeOb8sPfdVVdVVVVUBQIkcEZ3ppMuuxRW0KslWaUrpOfVZu6ylT7mw6kzLaJqt4MaSpCOWPTq0WnlJBmytLBhpGZqL88zOeeeD95IyMYlpnud5nh+PQ1WFtm32u00IPsXkvNt03W7blvp9t9s0dXU4nD59vq2bum0qM1C1qqqc45ur/Xa7yWIq0m66z1/u7u7uL7bdy5dXMemU0qaprrftw3H4fHt7c/PCeb/dbcVs6vsvnz7uu2az2Ww3u7/61Z81df0fu/b9Dz88PDzmafaEvq6NCiccm8AhhHIMtt2mrrKUPwCqxgDTNI/z7J2rQpVFjqd+GCdAeEn0+e4wTVMIXkT7YVgmNAkeHk9vXr2s64a5SE1JycsK/aWYGjM5ZMJlCKRgpEwOyia4Qr0lOkMzi1cycwx1IAACMFVLOUczptJ5J+fJOy6rugkNzUiTqZYOpQAoEDISOWQXgg+LPiYhUTZOOReGSE4pzlOcxnmeUaVM0TEzMlNx8lwmI533vqpC29Rz3bRNs2lbH0yJjblIQiwZZylo1rrz3EfiYlhqpR6Sxa0uWJusZq1rQ2lpMsGZSrKSa5/lb1r6+4DueDwhESMwsWvctm0FIOUiO5z7YYwxxhjbtmna2jMHR455Ur25vmya6njqvWNAHMYpHE9zTOM0Vd4H5yRLCOGXr199ur33pz6nfOqHz3eP37x99e3bF5++3B5Pp21X13X98PD423/87RCn4MPlfp9i8sTHx8Px4b4Nzabbvn756svD7Xw6BaJxnsFMY0opiTko/e2iCWaGAMxUheCCr9pWpvnxcDyqkBQyvzla7o9jqrwv7uHh8TiM8xRnRjyehqv9lphSlm9ef1XX9TkKFW9ScileSh+ErAbiEFUZCpy/gNNnhKlwiCyLmGRTUYPKMy
 EpWBbNKWdREVMxIlRzxFx7xwRQyHlZRFKRVzVidr5yyExNFXZt40MooLkYgFjKBAAIKlkkZclJcwaVUrcbrPMoi4myY3bBzXOwMl9F5BwbgA/IRCIFHi3EDlQDybl8PCIs43dSahxYkgKzsjF3CdxSli3COa+FonTG5xbXOq+9QgV2/pGZuXEYAJGYQgie3TDNarrdbrY3V4dTfzgc52lOMSJAVQUkvH14TDm/uLnqNpsY5+1207WN9+724RhT7vshjhMBsOOH07C7SCH4n33z5re/+66qw+HUG8D7j7evXr64udzPMY7j5MjGcfzhhx/atlNCQuy6hgDv7h5MNQT/6tXby93u9c2Lx9vbY99LSnmeQbToOjvvyHFgbupqTpnBCLBq6zevXjh2X77cMeH11X4cp34YPLskeY6JEczg9vauqirn3P39PSI1dX0/nNq6lZy3281ut51S9N4T01LWApDZuTsPBqqCCCLFUz6BjrZ2QWE1U0JidlVV1VVIKaJpYUMyITpCsIJSSFa1TEgelsoJc0oxDSklEQNgdlWNVeVqh23FPriq8kQIarNoEiO0lLNkycXsRVXFRNZeF6KqFE4goiILUc5sat67kKosOcZExLR+TEQsi6zxzPVcNywv2czyuHMHH/AZc54WT7oQmYlWqGrZMbLSROCpcaVLegSA4BAJCb3ziNiPI5h1bVv2Zu26jtn1x2PKGZBijGAuJQnep5Rvbx9Siiml66vLqq62G4kxHQ6HpmtD8DHlNM+3n2+/a8Lbr17udtuPX+6r4DdtfXc4PRxOlxcXVQjTOBZN4X6cRQyZRPXF9bUPPuU8zfHx8WG7u9hu9t++fptzDN7fO344nXKWGqBI3Q7DoBBdCGKmgMwURIfTgIhlXoCYri52++1WVac5TvMcvD8cj48Pj23bXlxcIGJKkdgR8uPxgI6TCjNt2rr2IVA4RyWAIvDLS5GEi6oblcGeEqnUgIsM1TJxVrIrIvT
 e+xBCCDmnMx0MAIr/K6BjykaQQRTAQAQkx5SnLGU1cPBGjJK9SgbLYNmUDagkuWiGpqZaCJAp5aLOvlKiYSnenpYciKqiQKmAVFVWPcKy+GaF9E0AHRMRmloRn4BlkGON13YWMoFnlNAClC4VUklyFBCXtc+2etelNFrnR8r/GQI6XwcAIGbnHDtnqqZ2PJ6yyPXlbu+dSsZproIHw2GMTPR4ON3ePRpCCN4xl/H2yntE7DaditZ19fHj55zyoR++fHkIobq82P3s69cPjwdTy2a3d/ffvn2bLvafb+9ubx/6YR7nuQqx67o5p007brt267oQguU8DaftZn95+eLPQqib7vu20/cfjg8PMs6FA2pmQ8o6TioCiOT9cRjv7x9LJOo2m6YKpbk0DsOifQI2z7OKxhinadZlcasgsyENw9jVNSGNU9x0mRMqETOvedgCvkJRpyPjJXxhqUNwaTItzFwod0MlpZhzLuMoomKS5yjDNEvKK4Vy8RyaJBGoWhJRkUJOMTMiqEUNyPvkfHRjJHKESOSyLbRTQkOErGpgxX+W5I/PZwhsAXnKxyAkdrSo1xqYZpGU85KflEX2RLAoZ67wvgEuumKll1foJvYUOgrAtLTdbQWsy4UyWqceoDSl7Jwi4DpEurhZV3QZGNkAVHWKMcZYBV9VYZxiQcjalolpTqmsRCrn0nlXhdC1TRbxzg/jBGb77abwmolwf7G72O+qJjgmEdnvti+uL999/BxjOg3j4fD48ubm8Xj68d2n4/GUUp7GqdzPh4fH/eUFImiMh8MRAJq6vX755sXli227vby4cm3zH/727x8fjjnmMjBUSAnMpGpxGBBwJHREoa58XdGEVFVTjKdTryLe+1M/ZjVDnFN+OJ00ZRERyuyYEVTy8XjabDomkiyZkI1XqJnKTA8qANhZN6SwnEiE2T2bqi1egUpDf44xxjnlbAaO2EiL85mTTHNcFIDBGNEzBbduvipebuUKC6Ai
 ipmKxBhHYgB0voBfCGXQouz/hGXAlxFhHYsuGQcTQZk/sXUAuTQ7DHJWp1r6SYgIzCbCAGVG2Vby3mqRZ/BywaHWLtOThN1aDOnZPMEgi66ud5F5MgNFKIt412lPUDCHZo6dc26a52mezdQ5Zu+K5pZ3DAApSco5WEEASUQNLHiHAMM4dm3T1LWI9MP48vpCzT58uq2b5s1XN23X7rcbVfvjD++7Tfvt29cXu+3d/cN+24hkNQvOGeAcY5qjqkqS0FQxp83n21AFU5VNF1Os6qYK1f76Zddtfl7XL65uXlxd/3f/w//j/R++xwTMDAbeMRKlnBAcmJFjDj6Kfvly54O/ubz0zAUR1Cwpp8qHnCKYaYxFb2xOsfNt1zTZ1HknotMc67pwTRacE4nMsNC0zYCJHLOacWkdFTdYeoLPaiRcun4mlj0B1U4ylqctNYtzLqdU/B0wMoKpeUeO0AC9Z1VX8FXvKHhGIDFMApANBWsHXApn1Zg1xmSSQZXAmACZzIxLMki0IOQlA8bSrSmsYU0irmi1qoFZFilbfYnQlMonRESwUiyspbgVd1mm5J7mjHnBbhefWnrFZcSq4BoLy3+1Wjgrh69oKyI6FUkpMVFb18xsZpXzxIwIcZ6D7+qqApvMrEwbIlFdhQJ7qVpK+XQaVVQBNl272XT9MKScry92bdPkJF9u76sqPB5OADDNcds1Kc3brg2e+1P/3Q/v+34owI2IxDir5FylL19ut7utqngmZjw+Pn5pPnKo9hfXdagqH/71X/5L7/x/+z/8j+9+/wfpe43zlBOxI+fIsqpYsphFzbxzdVWJSI6JmBxzVmFmIKtDJSLsuPQr0xy9d03beu9e3lxdXV7UdX1GAgUEoAhkoiqQWrFXOM/TFHIGmKrxsgeqXPlSk6BnrrwfmSVnXcIYIiJTGXgBoqfGtJqBISMQsQAqY1F+V0UzMqCshMaVq5grI1ZEE4hJxzmOc0opS15asqVmh0JtW
 XyoYdlju5CqSwZb2HIqqlky5YJJgRXxRirJI4oKIYKVDW24wqAABaVHW/v4RebgCdXHpRFma5vDZNG6XzT3cGm/L/ZdcAMnYpjyBNA0zcV+t+0aUe37YZrmcZKmCqPpHBMilgaDmbVtc3WxnebYD9OpHwhximmcpk3X3d4/Hk99OWgiYqpTTOM8A2IWAdPLi0tEmOb5y/3h/v7xh/efU8yExMxF2EPA8jSNw7DdbT27VXTTDodHHz5WoWq3F8zUtdt/+eu/AID/qa6///6P0+lkScoOVslJRbSkRwgK4KaoIsH7y8tLFbl/ePDOtU09V5WBtW0bQnCOh2FUkbZtmrqqq1AFf7HdOO/HccwplsLVIXrHziEXljygKqgYOjiDfcVSCQUQEFdhWwAids4D0BxlTnmc0zRHyQX21sLsoDKqU5TTJCdznm1tOoIYAAExsAIahZJKEgGwiKacj/30eBxPw9xPMeeMYExF8Q4IkLFMrBQneE5OIOsSAVQsZaUsRLlw82BJO4rntVULYrEkfd5kX7nJBTXlc/WOiCvzb4kqixkuoiVSvOw54KyggKoRmRNVVGW1nFLOeY6pwNCErKYfPn6JOXdte3190TZVGQgvcR8ARTITEnHsx2EYS8ibpjmL6KRVXe02HTEP09i1NTNPU5znJCIPj6eY8u+/++HUj8TkwK/NCairWk1Pfd8dT69eXjvvUsrTPIUQ7u9v67qu6oZdS4Sbzf6vf/NXu83mP/zHf/jdP313eDyMj8dxnrzjFFNOidi1bRu8F8nDlASwypkAtrtdW9XOk6myc5cXu912o6rzHMdprKtwud/VdV2CoZnVdZOZYpxzkR0yFikkUcgAZtHMiJDYOSQAKIkQEtPTyKNpwT+N1CyJjnOa5jTFmGKOsUhUWfkPIhAgM3lHCZSX51hLKNKsmhRb4OBDP6akAyGA6hjTcZiPx9PpNEzjVAi7YKXiXmaV8Lm+kkFxq4Rn2P1JwlJUEU
 nJcEXaiWjZu0aA5/K9IJq2Up/WLxErsSAvMX0ZIzyPdzACPBtFWrNYK52Bwo8jBWcqkiEjeO9yln6cJOdxnGOcY0oIcLHf3dxcXu53zFhCUzGlrg59TzFnQk051XXlmad5VlMTTWbznNqXTaiC907Xaax+mO4eT/ePx3mOwzghqCOXzZwr6xyEENqmnXM+HI7bTeO9c8xf7u6nOe62u+3mmGIMdYPEnvBie/EXv2xeXb14/erl3/7933/84X0YpjTNRIOvKt+2dVUVTKPIJ2Uzk4yio1mLVfCha1tmRsBt13318kVKkRDrqmqaJoTKAB8Oh9OpF0Mt60cyMBMjAljKgoDOeUA085JzKoU8Cy2Rs3iKJX4BQBYxUVWNMZsYQREIMUBgxy67Ocac85wTZYiJvRMm0rXqKjFymqkf4zCnOYn3wxKzAVK2aY7jOKYYJSXJmVau+1KLEBLAcgNxUaa1VYhgsdJ1sYfZMtJbyu9zik1Ey1KFIvVTxIYX4tKahK/D44Qrrrmw+59kSdciHp6p45QsFlZzBQNz5Yp67w2ggAumVrpH5NzVfvfi+qJpGu9YAWKcDscTIg5D1zR1COHqwo3DFLyP83zq+3GcmCirEmKc53EYr6+vjgD9MJpZP/Smev9w6PthnucqODB3sd8C4uPhFGNSkZQyEm67dp7j/cOhrjwTeucOpxMRD9MwTUPVdoGpkMSbugkvXrVNc7G7+Mfr33789OX+/jGXESUwIt50zaatifBwOFqZaidqmsYzg+mu28CicMRNs7m+anJOzBR8xc7N80SApjpNU5aMpaYWJRQgAgAkdEzOPfFxEU3VBISxeCtEOC/h0ZTyOKc5ZlXNZoAUPAXvRDUX+V9YFJ0KMxpMMz1Rh8tWxCzqsuacx3Fynpl96T5m0RijpKSaVaR0DhhtzTqXCkYLYPlUfy9qVnregQBwFn9U1TLnU6xcoQgzCSAScVEgxHUhaOEenjU4zkNwKwj7BKW
 dlZ7X7xReyfkcrkUSgLu8viyJ0zhNwzDUIQBAjDMivXpxc3mxTzn3w5RFihzSPCdEQCIXwnZTT+N4O47jOB8Oh2me26ZGtDLp4hz74K8v9oR4PPWEMEcZp2Pf90M/DMO42bQF5ri+vCjFLBM+HE79qd9uNvv9rh/GOUbnfcoCgHf3D0i46bZtu3HeEy4pOTm3313+5Z+1b796e39/++7jxz+++/H27m7sRxCZ4uyDe7nb7OoqNM3N9Yuuazdth2bj0JeCwBFXzhNRCKFcl3Kx5mk69sfj6RhjArAijZAKW4UBmDwXgkhpHZaq9KnjBypAvCTRZqo6pzzFPMecRXPOIooGa3O79ACXDmT5pXMxa2eUlNCxc75M2RAgRlGNySTnlMq0foGZFhmwFRQq70nO+jTPxjAUlh6DrIVSCfZudatgpgpFuhEWCYYyUmIASISqi0bVmYas69rLc9FjtkxaP0sGFnB0JSmYnhforGQ8h4BznEVVRQkoZjHTum6cc6Y2T1MuG/tirirftU1p5DLR8TikmCTnOaaHh8M0TT4UnVtYHAriy5urly9eTCmZWdNU3ofTMKUsh+MppUxEu91GVA+nHlfN4q9eXn8AIIJffvt6mOLd42PK2RGrChI/PDz+7vvvQte+9Z6ajnEhDDlHznVN017tr169evPN19+8+/DuD9//cP943227b796/ermRdM028120+0WEA+5LKlMaZKcoQD1gDnFnCIimshUzSI6TqPKIqirjOyYEIiQbZ3bRCvoI7GpKS/PvwCB610yBHLes3PFCJ2Dog6dRUubsNgnsitEOzAoQmLl8UXVh5kZSU1zloUwkXLKWUVKP6Cg8aWuIgIuk+eFf7EilbgO9JXcgxa6arHWkgoubQJf6n9bXKCaiepKgNflZCmULGoRTVvU1IqZPXXVAUB0ZZnimsICrMjU8m9Yl9CVpMGBQfB+nGbnedNtQggiEmPMIqdTP05T2zSd95uuqUKY
 5nma0xyTY/PBplm9D1nUwPYX+7atNMvx1AOY926aYZomZmJ22+2mqYJjPh57WE/vMIzeu8vmYp6jiHablpnmKb68uZzm2TFf7LY5y3EYsggCpDmq2u3nLx93P1xuLr0PhStUrjISMpBv27quL7b7b15//Zs/+02McwhVW7chBGbnuMyEcYlTRR8hS1X0wMqMmk8+zU5FlWVj29ev3w7j+HD/YIsXRLd0OYjo/1fWlzVJciTn+RGRmXX1NTM4uFxSF2W2K5ErM+n/v+uND6KkXRDkClhgMcBMd1dXVR4R4e568IisWglmAHp6uqqzMj38+Nz9+1idn1zQWEpRJAmhdlP8OftWk5mllOdlllI8x0VkKRKCqFoRLSIkAkgamIOKKVyHs91HQy6FAJMlU3XuQdEqOysiCEBemTumBQQI7LI+jfXzNtK669JqGSqqwaskqG2BZlIu0MhgtdejToSMykxaQU5rkRnQ84HW4L01U6tbRvVv1x+whoRWd65wfR1AcPVSJNoOQ9/1S8mlVMYbVWPmzWZ4erjfDN1lmqd5MdVSihO/Oxf9MPTvnx4B0aSM05RTZkZjTql8+vw6zfN2Oxx2m5KLiuRSxnHsukiE0zTNc9KisevLPG/6+Hi3/+Hjs6o93t+fp7mLHSJGotp5Q7hcLudxkpxU9Le//YeHD18hddxGN7w7gszM3HX9brev9wXqsHgtN82s0QICGHpbEsHQQohFjYOYZgQk5qHv7w53aUnTNJqpAZmq6/ENnfkABBiIKl5LYOe+8yfjqjAyT+Pn1+c///zL68txGqciDjeKk6aoqKhTpNZ3qdRHVbLFAA3N54mMauTVai6ga0mCzQIRkZkMkQCL2G1W556p9i49eiOymvhIlROnmgDwWtiLmvtJIoA2fK2qa9LpDrH6TjOvnHTd8GwZurYRE2g+3tYtu+ZzYXXYZgAQ3o7HGFtLvZRh6O/2OxF9O51yTiJlnufz5ZxyPy/L29u55
 GSAOZOo7Laboe/6vgPT4/H0w48fc8rMpFLmccIQlmVZ5nE/DPvd9vPnl6UUr+aJKWIU1SXnl+Pb0+NDCHwe568/vNvvt88vx83Qh8BLSkSwpDROc4yhjzGGcFnS88vx+z99/3j/sNnuAj8iYOCay9hNWIkc2u1YCwJoGb0nr6xKCFiggLgcHnMIkhEJQQARh34YhgERiVlFVKGYGUhnUkRCjNbc91ooYcXFEb1iMjVVkTKN8/l0fj2epmnKjc/H7dLLlrZ0Ve0T2qav+yesOiG26mFQm568AvBQc8/A6ImWeJG+Tk4BGJDnkYIGBmKgIkgC6rMr4lTsFfRZf6kBqgGajyJoLanEqAHvZiJCiC7S0owNWhy4LjJdO24tC/KkuDXuYW2YAkAIxKaaVZGZCWMgKdkMDrvtdhgATIo8v55jnLxNstvtzOWMnE4CcdP3IllVY9+B2WWap3EG02GzSTl/en65v79/ergvuXx+eSPip8e7nPI0L0hoqpfLqGp3d3sV/dNPv9ztd2h2PB6ZaLfbiGDX96/H03gZN5thM/QuIvrp0+efPv707sOXsd/2Qy/1Pq5OAqhlgQAVAGlnst4XJTJVInMw3b2R184q6uALEYUQu9gZQEoZfBSXsYAtuVAoUSSEQOi7H0De+AarDIZmagqmueRxPL+d3pZpKsuUl1mKz69Wxl2/t2ujUOvwbt1+YGifytvooCsg4+nlLcaNCIM3JwFc9kUdpScI5DcBtXLZeHPK1LCospqoUx2YqIooIXmI57oMCKZeBiliPZFXG8QmdLUS1xi0lYL6ceg6GVoN0LtcazIDdSTP3w3MLABiKYUCMWEfY8kyp0REXYw+R7Lfb0MIOWcA6AIV0Wle7u8O758ei5Tz+XyZplJK3/dfvH/69Mun17eTqCEAMavZ5+cXJuqHoeviZogiQwj8nN92203fdy5DP47jMAwcwsdPL2/n0VSnRZBOSLjZ9M4abgDjOKtoiNHMTu
 fL9z/88PDuabPdc3hX0y5ne/zLzHw1WWtbyv7hGzRt0HSaPaAhAsWAhCKEIhx1v9v1fX86n52d3gyJTYXqHp96YlmI2Ut1NrMmpuYdRSnldD69HY+XyyWnZCpoyqAG5gsRRlghyfbItbJU1b0BbGABIJhhRTRbp7FB78hM3n8l8uFiNXBQBSITNXhStdgKdXp17WqctR9gzbZuzknzSczV7RVRpnUx8KYcXKWMW37piBys43kV62xYl7WEWGsIFFXPygAsUOAhUAjBzJ7fTmCKhkRUYjkcdtutSydE3G4A9Hy6HN/OMYanh7uH+8Pr8fh2PuecD/vdbrv55dPz+TyaqsvuDl3UUn76+dM8z09PTyHGIroZBgBMOe0326fHu58/PTNzP/S7bT+nwkgIQEy7GBDhMs4xhE3fxxhUShEdpznkEiMvRb/7/gdTtSz/4T/+5v7dB7AOIzrFu5/mFQJuRnnNyn1F08lCEFthQYjAaOjxrmZViJvN9v7ubhwv8zyr+Bqw525WREQMUJlNxYRVVEkNyZrFGwAUkbQsaZnzsoCUgCaEhAwVbgQDqNTlqutz9cul1nQxrOhhhSuxwuBQh4YBCTvGyAhIapDFzFtchMHFnRAV0GWDwMWV3DE2tQ7fVwb3kQ0tU1M0VFUAAjK6mZzzH0b0ee3q9lazXuv0m8B17ba3L2ogcNIGz3Bw3ZtFCA93ewMspcw5iyoDdF1AgBh5uxnAUEUCD10Xp2lKuQxD/8X7p4e7nWp5PZ5EZDP0h21/nufTecwixIQIMYau786XUaWMl5E5/NXXX0oX53kexzEQbYZutxmeHh/MIAa+vz/Qeco5dTEQQgguRCmXad5vN/vdxic6xmlx+DCGYGAfP/5cyj9eLpff/Pbv3335NeAWQsWr1nB/a6YtiGij3FAEcHYh8fBmLh0qJTuvTlHVwGEY+hACE5takYLkQhlkZiLKEQFBzLhKZNwsjps6WsQhumvSym5cGVy
 l+W6wSuB5kz+1SgKguk/w1LM2LeufwQDBZ1EdlzVA5y8lAmaMTIHaOryjN17yAAIYke8V14gjaiKmCiLAbP4ZDE1rBQ9QkInwirRjQ46qoa833k1wRVrMfbOfqNZeu75LS1bd8B3+N8MwjbMSDsOwDSGGAACBedN3+8N+6LpSJCX5/PLmyf7Qd/f3u/vDYUnpx58+vb6dnAT55e3y/PnldDqbKiFRF+72u76L4zQ59QO0RH5Z0ul8GS/TF+/fpyJdjH/7619N8xwDH3bDPMNhv0ulmFpOi+MvXeDArKrD0HdddxnnlAsiRGJDez0ef//NN1nKfwZ7/9WvzIYQAt2YI1xDjyG6Fps3bNVVNNR1F1TNtOSU05KXRaXUghiBOOx3hxi7k15KKWoaiVrCVyFPJ2VgDkxc/UR1f6amIcT7+4d3797P8zL3s5QCKqWISMEiRQTMyCmTbgGgmpkhNoYSJlwVDQzASbrALBBFxsAElZRBTTUiMFEgjETtfQEN1FX8GgZaTcopSwkQzKVOTU3VPKCDl2a8+mtPmB1sa7lTm51b+5a+Ue8zTOhQEa6Aa2WE8bH566vWFNU5DsDCOI4xRuw6ZBpih2ix67549wiI85xqbtvY8D03mZd0vlwu49jHsN304zT9/Mvn09sJEZ32qO/idrsRKYgQ2CcmZZrmXBVXMRV5PZ7u7ndE2HcRwAihlDIvqe/C4/1BDVKKx9NlSfP5TbvY5ZReU9ptht2mI4J5XpJACCwmb6fTH7/7bhiGjsPjF19Rpb+sYkt2Q0ForRIyAxHnGpOa+bkWr6mWoipLWtCLAyJk2u52T48Pb29vS81ZDcCHmdANXW/ZCsxUFEyaIg0S0n63+/qrL4ksLYsvNqioqlzGaZ6XeZrnaSpLKqWQKoAFdBQIGI3b4g9C1Qx22fM6gk4YAwYmYgSArCZi7JA/YSAicn5iED+S9VKB1ll2aPpETeLDTyY28NIfELtCcsud6gWtZnU9
 sjUFsAa/39ZSUAv2inf5oamG5ZVuS3v9r0M/9EM/IMA0zqJyt993MV7GKfbdYb81Uzf81+NbSnkz9OM4//zLc8o5MMUuppxTyjHwbrdlDt5q2++2fcfjuLgmZwisIs/PL0S0rmPPKQ1Lt+kjaNkOXZFSSpmmaZ5HAnh4uNvvhqGP07y8HM9pOauUlHJKadP3wzAMfX8Zp2UuMUYA+/zp8//I/6Sl/P3v/uvDh6+67trMWH1nM9BKoVNDjohlM8JVAAAU0klEQVQrvZqolmyiOaWcZibKOad5jl1HHIZ++PDui7e3t1LyspiqNXHJCg/lIlEkWhsG97holVWGiGKM2+3m4W6/LAEQCTEwRWZ3iCmVcZrf3k7H49vlfJnGUdLi9IDNkRpUGseKwjuvCSEGxuCDz0h+4hyAgNY+1iZG6KCStQwB0Nj/Cq6dr9XkzEzBGKyIsoFVeWFTA18guX5OJCKHe625hVoJ3DyCa24KrU/QlBoqCtaK2mazAIgY+mEAAG+YbTeb7WaQUhBx33XELAJD3xsYB96Fbd91x7fTOI5mlgmnKQGCSjFVV84khN7FN8w3zDGECICvxzMAPNwdfEp/M3RoWkqeTYjg3eNDypazT/rJp+eXnJM3n97d7fuue3l5ndKSMuVctBSVMgzDZujnJaWUWIRDPL29/f6bf0bmf/gv/+3h3ZfO/nzrO/0fUZ/WUfBSybF7KVKSqJaS1URUfTt5LsnA+h4Dh81mc3d3fzydVM2joWuEurn7ZISocuU0WNP8+gUhdTEG5sLeyHTRTQ4hEtHhQO8M8ldf5JzGcXp9Pb28vr6+Hi+ns6YEph5IoaV+DUqjwBhDna9KIimrgSFSdAYyX1ABUDVXAyX0IU5AF+9qdYqDkQAgarG6TwfjIXBYz7rBKg2vWNeDV/j1mne2fytE2tgc1vdABGCsaBo2LMKal8WbWxfSkjwJ22yGu/1OVHNKRLTMy+UyLkvaboa7w/7x4T4wP
 b+8juOcixRnU0FERBFx3h816yIPm6GLQUWYYxchMJqJFEHmYehcE5sQRcsyT8qMaGPf+XOMgU3AVMZ5joGYsKiGEB7u73IpqjbPKiKXaU6lDH3fdbH49EaBjPD6+vLNt/887Ha/4e7w8BhjXA+iNXTYP3xt/YmoFKtTOwUAfcyAiUwEwPq+rxQHgfuu326Gvot5SX43vdwpIszk3SMRp0EXqgU2rc7AzFJK85JySj73TQhMAbFGW0Qahj7QQR7t668kp/R8PP7y8y8f//zx9eW1LKkOVACuOyRdpMiIxISYRKesZtYxxUAxUDWOlte1C0aDRo8P5hu+qwFds1t16/GMYuXmNwKDtm/kS/FNCWSt3GGt8lcss/72ViN52ipqDbwGaA/Jv1rzB0MI5/M5MHd9L0VO5wsTiRlSEpVlXsQgMOciqioERHQ4bJeUx3H26biUMjP1fURERo0h7LfDdtOfx7mIhIAxhi4ENT0cDofd5sePn0rOgYmQci59F/vA87Lst/22C6UPS/bDDSKiJRFGUNh0PAz9ZZz6LoyLOgBNYF2kwj0nzKWAaWQoy/Ttt38Ydrvf7H7Xdb3vErWehqeL2cSk5JyWZbzMy1RymecplZxTulwu0+U8TvNlHKd5STlpKSpKIbhevKn1fTS1yBzIwRfMRTopMXSe4NYUoiVStgY0oixlmhcmCrEO+/SddV0k32NzbXeiXb/B3e5wODzd3x/22z/+8fvnnz+ntIDnnWgGFKOTgRERLWJzElGL7AeMAyIhFKgmogBeqRCBKDQYx9bGWgvX0K7XmhHXf1SNqxRj7QtVf+m912piCP/PxmBNWf3+rzm6451XIKBG+YoDXAslAAsi4tG5SOm6oKq+u+XEYPu+N7Pj8Q0A+j5uh76PztqDSwrjZSKSYdOHGEwNI3dDT8zTtKRlJoTtbui76NFov9vMaVly9hnJ2MWHIW6HIVdRoqHrY99Hn7rBSgBquWRE6EL3dLc9n0
 8FYb87FHGiIjrs+t12k4ucx7kU3Qw9MlrKnz5+/D/dN3eHu1Jknqd5msbxMo7j+Xyaxul0OU/jNLrk2zy7Y1MEyaWUDIjEDKrMrAbe+0OEEHjouoe7u6en+4fDduBoToPp6aC2Hnq742YOULuMhnEI93f3l8tpmWeVUkrxOXaHG82gw87bWqoCGfqu32w2zCEXOZ/H6TJKya7KbIDR1eh8lUp0TiWLej3KgYNXRoBkVgB1PSV1J7giTf57nV1RTU2JfdXMTGpC1BDZFnDV1h+t5FRt2xOc56FBntU8bd3ywCv1/FpI2fV7a+SH1ZXXEN910flPOAbRK6Y2dN3Qx1JkmiZiGvqeENSsFCHEh4c7MJgOyzLPuRRABAZVNrW386Qli8pmGDbD0HV+t5AQUi5oQkxD3x12w9BHM8tZN32362PH0IG8XfA8zmYKpqbFFKdc5iUzcd/15/PRpyoQaMrLp9cLM5nBksu8ZFH10XH77/8YkBgxO5qj9TarFxzXxAjqkSfutwN2Xcm5C7zZDGYQQ4yBY+RI9OMvz7mUKRc5HqeUXt82D4fdX33x2K0TVQamUkphLoGDEjuICC3CMtF2s/vw/ot5mY8vn6WU5Gv12lAEr2gYzCDlUkQGHQKH/XZz2O+7vgNE393pAm0iIyIB5iznJKUIB45MHVMM3LEPF4MaYsVcnSyy1s8ebREAQdFFPAy5Si9ALZEcuIHKxlvb5dV0gAwIkW59J9xaFzTErNke4K1vrT93BQpuav+bFwJAIGIzW1LCXJjR5w4J0VTO55xTJqZh6D3zm5dEYBy7/XbbdbEUSWl2AF9FUhFULVIA8eFuv9/vNpshBEKRcZ6nSS7jlHMhMJMyT/M4TiaSckGiTy9v05J1WS5LHuekRZFIAXIWABAz8WxJ69SPj3m120yE6BpnDdhBW7sUBsgUQyBmcNhTxZ+CAAQwJCZmy0UAus1GU0pzssBzLuxxvGgMrKWAWVablgWILks
 6nS9/92//+uHurqiIlFzIAEMouWRiFqnL5j5LogYU+O5w//WXX0vJ57ejw3ArDsUUQlBELKLTNOWS+3642x9UrY+hC9HxUCbqmAlRDHKROUkRDUx9oL4LQ+AQAiCKT0itjsjvk88p+7QIgrpUMQAYtLUNU1UPCBU2p1r0GxgBrrdX0XvuWOEqwLrCVZVr3LPC+vv/vy9uc8/qpKtZ37hY8CIJwYpqYy7H0MUuRwQTNSbaxk3OZV4SghHRdjv8+ssP7x7vmclUcu7PfZ9ymqZFL+NlSSllM8spv7xeFKGksizzkkopJeUiClbhx3ppdX6mHqu6boitaXbTCmqwBDa7A+v7jpjHcfJDSE5I4LEMcf/01B/2m74//vSxlLx/fNrttv12+Paf/rfj8GQmZqRqpkKBCEEEkUrO5CR3mgWgeNXffq2ITcvCRJ9Tmv7Xv/zm7/72w9N9kQKZkIJHM5FCRCLA7PHNPwrGrru/fyqSfzSYLqfWLii5MC5zylkNRORymVLJm83Gw/C8LFnFAALjEJkIFUBFp1SKKBP2kTdd6COHwNAcJCGwuZKx943AsClyuHX6NEkbdyqiQK71B0T1IBOCoSkh15ebI/dOzF5dI1FFNFuqXQe5V3d6Dd14PRPXx3vbC71iyf6tUHLy5Al9Y8EZ9IoQIjEjwGW8iKgUQcS+6/a77dvr6XQ8zSm5bsU4ZxE5T9M8LUWtqFopVmn1aibTnH514h5dvPVgNXmutSZWYgH1DAZUCakWmW5+oqYCxIioCiLZcy4T9SBpRZylo8zz01cfdveHy/PLdLmcxwsEvFxO5lNkhABYZUGKACr5vTUwU9IqKaBtrZ2Q0JCYDbGIFjUEyHn8n3/419/9p39/t9+JFNMiJQsHQhLMAIgoCGrkhIcIiH3fPz68E7XnzyG5uA8REqVi83yelrTMaVkWIALDE5/Ssnz85WWeZkJwvnA1SKJLKqkoEXaB+8Aup+SR3Wyl7qqYe+X2
 rF4R18UkA59RMbvRj3Oks9qKGxgSVO7/6u+rHdY/tI3N6voaRXRD8HHNAW4zgBs3uXaQbssydzSuduqqJb6SYmVJvl4j1mZbmkTZZZqej2+ilfy/fmTC2lpoJQJWVj4lbAwpWHtdgCDSph0QsfaOgYmM6s42EiNCSdlXPZG5nTIjZslFTJ3iAwhFPBkidRUT36pBBIBlGv/0zbcqlVQ7ny6fz2cRicQYgvkcbt0R9t1bD7VcJ46d1c33exAICAjNVPyB+Z0xe7mMv//2T7/7zb+LwdKyeoyq6mCmzExgQAZWPQ1z2O/uum5AxMDRZ2tSyqfz6eX19Zfl87RkMNAiry+v8zgdT6cyzQGRyTvizvUiRLSJFAPHQFWLEcBXLg3RAVkD1FaA0NWJAiBBZchBbfi8udxUq+9xHQiogZtW4hv/ayc7gSY5fq1+/H0rHEa1adRstMX0vzDF1XGu7+/1QvCRQZeq0gYZqpo6S4CPmdSBNChiAE2/o5Ua0LABu7p09AYxUWMd8DlwIkV0CW0MoePgFYK7F4C6mswtlDiJMIhKyVKkpkn105m07BsQjcldgVNZYtVGA1N1/9RSPehCrFmA4xwtDtXM1dPbGLz1ylqRca2llvPUARMBB4rR1VOeX0+vx9PT450VQsoGUKT0XSche6EVYgQItfAQUZG+73fbfQwxhFCLNpGnp3fv34/v3j1/9/33f/rhxx8/firLwqBsQqaAoICmNmXJRZgxBu4jd13sYqSG5DhCpAZSR+zB5wZ8mUIrY0MNWW4lAUGrAyEidI7bhiBDS5QNXSCs3kqqVdJNZolVMQLEqq/0b1q7t6s1ri7zLxwnrq+oxGMAEFyVR1Swvq7CqvWtoYqGae00GFRx3CpgX/1oiyX+cg4hdhGRvOpUNS3ZT23d0DbQItOSKmq8TgqQ3xcFAAKqd0FN/fIIpW2ZWz0O6IeePG1vC0cYWK4pbFuaRIC6OuZsa2ae+SMGZKe1r8hKQ
 mLWXFwecm1DOQRpAAURuZBqiB0xTSn//Hz86sOTc9XHGGIIoiZpURWLilh5AXzXMYSw6frA0Yc369YDABKGwLvt5vH+/unp6Ztvv/3lp5/yNEYCBhKDopZLyUUNoAu06UIXA4fg5Q9jZRfzR1H5u7V6eh+sqv6krntAnbOzSqy/XgggNk4cvHGlNTez9Ut/4Zo1ErVhwesNtxpTbQ33t2DcreOE1pLFduGIEJD4WgFUC7WVuKyaf8Vi1y0T0yY9iIDrgun6i31QB8FW2vaVMkpbc78+b0Q0tDVNrhJEgDXBUKzTWw51oI9F+bVRQxCb0/ZLVIXKKOd4HiG6rwVfloPKCOzllBYLTP5snTLJ3bKJBGJPNOv8uidrqGbq6bMVEcwAQcwuWTwKeoOVQzA1p05ELGZKRBy6GHo/vYEjVa5Or63FzCU+LIbw8PBwOBy+/uKLf/3jv3z3x2/H1xfJUkRTKbmYmXWRQwgUmCudrgJQMdCGuzeXYT4jpw3P/EvPtaZ+BFUIgpl8lg4JyYeu1dwH11vdkKgaM2m1O2gWA1Wy+9Ykrj1QuO4bXd0qrtkqXOslNQMIfoIcFHA/4Y8EW+BTbaOyiCvBxNoZa1F9nT8ARwOaL/KOAbpktyGtVJrNE3qeBC1/bR+hJkHg5Fa3rn1NGLQ11xBx3QX09EDluhpYf+paollTP6nfN1W1AoimRlWWDwCsqCATuC6Lj3+hVdm06h8MDdCMmNOcUirUU8qFQzAtuRTnIo0x9v1mt9v3/cYNlOoGRXsHADMrLu7BrhGDzPzh/Ye7w/7Du8dv/vD7H/70Yz6PToDXRRpiiDF2zA5OO/zu8TG0FhGs8xCoq/34HSGAutJqoADMFJidv5KZXfpbnaaHzNePSK3uK5iBmrfSDUypqsyoGQFBW5harRXbI7uxSAODdZMJrwG/rot5IxqJQDVwDKZa1XEBWy9r3SgADq3DBUbEiOsZrWRU9QJW4/c82we/Ddb2N7
 QawVq6oLCOfpEUBwV9VNHM5Si6vjvsp5cXLeLKj+t5rElxjVGK2OgrbnqMyEziCJGfKN9Qg7prVtMEBEABIzNP9LniDz6dWdc13ZzNFOsX6MvhaErABJCWdJnmGNgAci6qMs9zEYkh9F3Xd7ELMbDLDVQc11o9a35hKgZiVlFFzx77fvvll7+6jOPHz8fl9SSqMfIQOUYOTB7Trfmu5ubqLVh5ofysqvojaoPrHsF8x44CkEsmVTkGf8wG5s7K6tq7Qht2r12g5tFAa/sDKmiDayMNWm/dh0wBoA0A6C26szr9a0dU1cBC4GDkV9+yzBXz9yQFVwqK1ZFdE4LqJAFahmotCBgYmppJqb/MBZkBEZCYQgzU93le+s32/a//5vP3352eP/nkm98/DhyGAZlXuoHmQ/2Wo62p/urHGxeeX7yWQmui0k5KayZXdwrY+hmInjzUil4rhQG1zNMqveD67mCEqqiqhJRULuOy7bsQ9ZxzkYKEfey7GBFRREQzC4OBS87VigRJwVRFck55RgAIaIDcVkSRHDp93O52hhRi6ALFyEhciRPBRF1WGOlKKltp3hXWR2k1ufSPpVp/DpGIkchtE4m9dFQzMedchIpauas29CE8n1wz5JrOX2MgoK9IA1R2xTYnbpW/pBXWWjmhK2TQ7iy0+RJ/r9CYxFo9ezPLX61hHUg0xXYckdZThboiKus0YU05tCqPQisIyTcfgTnsHx9ou3n788+B+fD4eH55eXv5bG1AC8yAKA5DWhYTqQOD1bdd8d0GSHjMaFnP6rO19javiVe7EbWkb6mtf2y8Ge+u1r/CBgBonrbadZxOGwsWqojlnEpOl1HUlJm2my1VsFZLkZQyGDEbMleCFwaj6ziJFEHweX8hysxBW7L3eH//b/7mr89vx+V4JIDaO1YpAMVA1Vu8GJmtldV+gNvYKtQWsHtvtSKVS9DpJzgEInZtEiBSF7kz8HWkla+UjMzZmKAen+Y/cS1
 IoO3lwU0BtP5XVrTJy6D66JzpxLHv+hRW6wsVWGnfamlus83V/66bMrjaH4A76ht7lurq6oPFa+FVk0VCNB8QE6MlA6CJ5mXG1kZyX4vMHAIA2rI45o/12lrN6G/rw93tTNj16AEAEhN4U98BPzPTumpOgOZdaPVHxL5eiTXpqEmVrSxbjfN//WDr/xTBEFR0zmVKScWQMDCbWS4lFAakEDRnAcgBkFwD07nNlCpMiBS7IS2jSVYzJEk5OTMCMyPS4/3h3ePdz9MIIoC1/WMAYi7dC4YE4Pp0gK5600YLPexXIohKKA5iQOs4FDMyM3GFPG+Ki3rP1JiaMmqlGSOX4VLzHN3M72ZlnKwv9lu5GkmN0c3D113VenbcsNHaipQ/7v8LuhRpugWd6nYAAAAASUVORK5CYII=)"
+      ],
+      "metadata": {
+        "id": "gW4cE8bhXS-d"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://BUCKET_NAME/*.h5'`.\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # The side input used to watch for the .h5 file and update the model_uri of the TFModelHandlerTensor.\n",
+        " file_pattern = 'gs://BUCKET_NAME/*.h5'\n",
+        "  side_input_pcoll = (\n",
+        "      pipeline\n",
+        "      | \"WatchFilePattern\" >> WatchFilePattern(file_pattern=file_pattern,\n",
+        "                                                interval=side_input_fire_interval,\n",
+        "                                                stop_timestamp=end_timestamp))\n",
+        " inferences = (\n",
+        "     image_data\n",
+        "     | \"ApplyWindowing\" >> beam.WindowInto(beam.window.FixedWindows(10))\n",
+        "     | \"RunInference\" >> RunInference(model_handler=model_handler,\n",
+        "                                      model_metadata_pcoll=side_input_pcoll))"
+      ],
+      "metadata": {
+        "id": "_AjvvexJ_hUq"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "4. Post-process the `PredictionResult` object.\n",
+        "\n",
+        "  When the inference is complete, RunInference outputs a `PredictionResult` object that contains the fields `example`, `inference`, and `model_id`. The `model_id` field identifies the model used to run the inference. The `PostProcessor` returns the predicted label and the model ID used to run the inference on the predicted label."
+      ],
+      "metadata": {
+        "id": "lTA4wRWNDVis"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "post_processor = (\n",
+        "    inferences\n",
+        "    | \"PostProcessResults\" >> beam.ParDo(PostProcessor())\n",
+        "    | \"LogResults\" >> beam.Map(logging.info))"
+      ],
+      "metadata": {
+        "id": "9TB76fo-_vZJ"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "**How to watch for the automatic model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `resnet152` model saved in `.h5` format (for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`."

Review Comment:
   ```suggestion
           "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `resnet152` model saved in `.h5` format to a Google Cloud Storage bucket location that matches the `file_pattern` you defined earlier. You can download a copy of the model by clicking [this link](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5). RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`."
   ```
   
   Clarity edit suggested. Also, it wasn't clear that this would download a model, which can be alarming for someone if they don't trust the notebook as a source (or just aren't expecting that to happen).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] AnandInguva commented on pull request #26048: Auto model updates notebook

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #26048:
URL: https://github.com/apache/beam/pull/26048#issuecomment-1492075779

   R: @rszper I mostly pulled the content from the webpage.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] rszper commented on a diff in pull request #26048: Auto model updates notebook

Posted by "rszper (via GitHub)" <gi...@apache.org>.
rszper commented on code in PR #26048:
URL: https://github.com/apache/beam/pull/26048#discussion_r1156175036


##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",

Review Comment:
   ```suggestion
           "# Write the dependencies to the requirements file.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse`, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrives in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for the automatic model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model (for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`.\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # The side input used to watch for the .h5 file and update the model_uri of the TFModelHandlerTensor.\n",
+        " file_pattern = 'gs://your-bucket/*.h5'\n",

Review Comment:
   ```suggestion
           " file_pattern = 'gs://BUCKET_NAME/*.h5'\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse`, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",

Review Comment:
   ```suggestion
           "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",

Review Comment:
   ```suggestion
           "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse`, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrives in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",

Review Comment:
   ```suggestion
           "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"

Review Comment:
   ```suggestion
           "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse`, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrives in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",

Review Comment:
   ```suggestion
           "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://BUCKET_NAME/*.h5'`.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://your-bucket/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the depencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://your-bucket/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse`, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrives in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://your-bucket/*.h5'`.\n",
+        "\n",
+        "  **How to watch for the automatic model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model (for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`.\n"
+      ],
+      "metadata": {
+        "id": "eB0-ewd-BCKE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        " # The side input used to watch for the .h5 file and update the model_uri of the TFModelHandlerTensor.\n",
+        " file_pattern = 'gs://your-bucket/*.h5'\n",
+        "  side_input_pcoll = (\n",
+        "      pipeline\n",
+        "      | \"WatchFilePattern\" >> WatchFilePattern(file_pattern=file_pattern,\n",
+        "                                                interval=side_input_fire_interval,\n",
+        "                                                stop_timestamp=end_timestamp))\n",
+        " inferences = (\n",
+        "     image_data\n",
+        "     | \"ApplyWindowing\" >> beam.WindowInto(beam.window.FixedWindows(10))\n",
+        "     | \"RunInference\" >> RunInference(model_handler=model_handler,\n",
+        "                                      model_metadata_pcoll=side_input_pcoll))"
+      ],
+      "metadata": {
+        "id": "_AjvvexJ_hUq"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "4. Post-process the `PredictionResult` object.\n",
+        "\n",
+        "  When the inference is complete, RunInference outputs a `PredictionResult` object that contains the fields `example`, `inference`, and `model_id`. The `model_id` field is used to identify which model is used for running the inference. The `PostProcessor` returns the predicted label and the model ID used to run the inference on the predicted label."

Review Comment:
   ```suggestion
           "  When the inference is complete, RunInference outputs a `PredictionResult` object that contains the fields `example`, `inference`, and `model_id`. The `model_id` field identifies the model used to run the inference. The `PostProcessor` returns the predicted label and the model ID used to run the inference on the predicted label."
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] damccorm merged pull request #26048: Auto model updates notebook

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm merged PR #26048:
URL: https://github.com/apache/beam/pull/26048


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] damccorm commented on a diff in pull request #26048: Auto model updates notebook

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #26048:
URL: https://github.com/apache/beam/pull/26048#discussion_r1157231223


##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",

Review Comment:
   ```suggestion
           "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse (will run for 20 minutes).\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",

Review Comment:
   ```suggestion
           "main_input_fire_interval = 60 # interval in seconds at which the main input PCollection is emitted.\n",
           "side_input_fire_interval = 60 # interval in seconds at which the side input PCollection is emitted.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://BUCKET_NAME/*.h5'`.\n",
+        "\n",
+        "  **How to watch for the automatic model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model (for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`.\n"

Review Comment:
   I think we should be opinionated and instruct them to use resnet_152. That way we can give exact instructions (download and upload to bucket)



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."

Review Comment:
   Actually, embedding the image might be even better - it will make the notebook more visually appealing/interesting



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."
+      ],
+      "metadata": {
+        "id": "8-sal2rFAxP2"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "image_data = (periodic_impulse | beam.Map(lambda x: \"Cat-with-beanie.jpg\")\n",
+        "      | \"ReadImage\" >> beam.Map(lambda image_name: read_image(\n",
+        "          image_name=image_name, image_dir='https://storage.googleapis.com/apache-beam-samples/image_captioning/')))"
+      ],
+      "metadata": {
+        "id": "dGg11TpV_aV6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "3. Pass the images to the RunInference `PTransform`. RunInference takes `model_handler` and `model_metadata_pcoll` as input parameters.\n",
+        "  * `model_metadata_pcoll` is a [side input](https://beam.apache.org/documentation/programming-guide/#side-inputs) `PCollection` to the RunInference `PTransform`. This side input is used to update the `model_uri` in the `model_handler` without needing to stop the Apache Beam pipeline. Use `WatchFilePattern` as side input to watch a `file_pattern` matching `.h5` files. In this case, the `file_pattern` is `'gs://BUCKET_NAME/*.h5'`.\n",
+        "\n",
+        "  **How to watch for the automatic model update**\n",
+        "\n",
+        "  After the pipeline starts processing data and when you see output emitted from the RunInference `PTransform`, upload a `.h5` `TensorFlow` model (for example, [resnet_152](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet152_weights_tf_dim_ordering_tf_kernels.h5)) that matches the `file_pattern` to the Google Cloud Storage bucket. RunInference uses `WatchFilePattern` as a side input to update the `model_uri` of `TFModelHandlerTensor`.\n"

Review Comment:
   I also think we should move those instructions above the Run Pipeline step since that's logically when the user should do it. We can just have a callout of what this step is doing here instead (watching the bucket for updates)



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."

Review Comment:
   Similar to https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_with_tensorflow_hub.ipynb you'll need a note on licensing



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",

Review Comment:
   ```suggestion
           " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet as our initial model used for inference.\n",
   ```



##########
beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb:
##########
@@ -0,0 +1,456 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "<a href=\"https://colab.research.google.com/github/AnandInguva/beam/blob/notebook/beam/examples/notebooks/beam-ml/side_Input_model_updates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n",
+        "\n",
+        "# Licensed to the Apache Software Foundation (ASF) under one\n",
+        "# or more contributor license agreements. See the NOTICE file\n",
+        "# distributed with this work for additional information\n",
+        "# regarding copyright ownership. The ASF licenses this file\n",
+        "# to you under the Apache License, Version 2.0 (the\n",
+        "# \"License\"); you may not use this file except in compliance\n",
+        "# with the License. You may obtain a copy of the License at\n",
+        "#\n",
+        "#   http://www.apache.org/licenses/LICENSE-2.0\n",
+        "#\n",
+        "# Unless required by applicable law or agreed to in writing,\n",
+        "# software distributed under the License is distributed on an\n",
+        "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+        "# KIND, either express or implied. See the License for the\n",
+        "# specific language governing permissions and limitations\n",
+        "# under the License"
+      ],
+      "metadata": {
+        "cellView": "form",
+        "id": "OsFaZscKSPvo"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Update ML models in running pipelines"
+      ],
+      "metadata": {
+        "id": "ZUSiAR62SgO8"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "The pipeline in this notebook uses a RunInference `PTransform` to run inference on images using TensorFlow models. To update the model, it uses a side input `PCollection` that emits `ModelMetadata`.\n",
+        "\n",
+        "You can use side inputs to update your model in real-time, even while the Apache Beam pipeline is running. The side input is passed in a `ModelHandler` configuration object. You can update the model either by leveraging one of Apache Beam's provided patterns, such as the `WatchFilePattern`, or by configuring a custom side input `PCollection` that defines the logic for the model update.\n",
+        "\n",
+        "For more information about side inputs, see the [Side inputs](https://beam.apache.org/documentation/programming-guide/#side-inputs) section in the Apache Beam Programming Guide.\n",
+        "\n",
+        "This example uses `WatchFilePattern` as a side input. `WatchFilePattern` is used to watch for the file updates matching the `file_pattern` based on timestamps. It emits the latest `ModelMetadata`, which is used in the RunInference `PTransform` to automatically update the ML model without stopping the Apache Beam pipeline.\n"
+      ],
+      "metadata": {
+        "id": "tBtqF5UpKJNZ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Before you begin\n",
+        "Install the dependencies required to run this notebook.\n",
+        "\n",
+        "To use RunInference with side inputs for automatic model updates, install `Apache Beam` version `2.46.0` or later."
+      ],
+      "metadata": {
+        "id": "SPuXFowiTpWx"
+      }
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "id": "1RyTYsFEIOlA"
+      },
+      "outputs": [],
+      "source": [
+        "!pip install apache_beam[gcp]>=2.46.0 --quiet\n",
+        "!pip install tensorflow\n",
+        "!pip install tensorflow_hub"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Imports required for the notebook.\n",
+        "import logging\n",
+        "import time\n",
+        "from typing import Iterable\n",
+        "from typing import Tuple\n",
+        "\n",
+        "import apache_beam as beam\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import PostProcessor\n",
+        "from apache_beam.examples.inference.tensorflow_imagenet_segmentation import read_image\n",
+        "from apache_beam.ml.inference.base import PredictionResult\n",
+        "from apache_beam.ml.inference.base import RunInference\n",
+        "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor\n",
+        "from apache_beam.ml.inference.utils import WatchFilePattern\n",
+        "from apache_beam.options.pipeline_options import GoogleCloudOptions\n",
+        "from apache_beam.options.pipeline_options import PipelineOptions\n",
+        "from apache_beam.options.pipeline_options import SetupOptions\n",
+        "from apache_beam.options.pipeline_options import StandardOptions\n",
+        "from apache_beam.transforms.periodicsequence import PeriodicImpulse"
+      ],
+      "metadata": {
+        "id": "Rs4cwwNrIV9H"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Authenticate to your Google Cloud account.\n",
+        "from google.colab import auth\n",
+        "auth.authenticate_user()"
+      ],
+      "metadata": {
+        "id": "jAKpPcmmGm03"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Runner\n",
+        "\n",
+        "This pipeline runs on the Dataflow Runner. Ensure that you have all the required permissions to run the pipeline on Dataflow.\n",
+        "\n",
+        "Configure the pipeline options for the pipeline to run on Dataflow. Make sure the pipeline is using streaming mode."
+      ],
+      "metadata": {
+        "id": "ORYNKhH3WQyP"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "options = PipelineOptions()\n",
+        "options.view_as(StandardOptions).streaming = True\n",
+        "\n",
+        "# Provide required pipeline options for the Dataflow Runner.\n",
+        "options.view_as(StandardOptions).runner = \"DataflowRunner\"\n",
+        "\n",
+        "# Set the project to the default project in your current Google Cloud environment.\n",
+        "options.view_as(GoogleCloudOptions).project = 'your-project'\n",
+        "\n",
+        "# Set the Google Cloud region that you want to run Dataflow in.\n",
+        "options.view_as(GoogleCloudOptions).region = 'us-central1'\n",
+        "\n",
+        "# IMPORTANT: Update the following line to choose a Cloud Storage location.\n",
+        "dataflow_gcs_location = \"gs://BUCKET_NAME/tmp/\"\n",
+        "\n",
+        "# The Dataflow staging location. This location is used to stage the Dataflow pipeline and the SDK binary.\n",
+        "options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location\n",
+        "\n",
+        "# The Dataflow temp location. This location is used to store temporary files or intermediate results before outputting to the sink.\n",
+        "options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location\n",
+        "\n"
+      ],
+      "metadata": {
+        "id": "wWjbnq6X-4uE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Install the `tensorflow` and `tensorflow_hub` dependencies on Dataflow. Use the `requirements_file` pipeline option to pass these dependencies."
+      ],
+      "metadata": {
+        "id": "HTJV8pO2Wcw4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# In a requirements file, define the dependencies required for the pipeline.\n",
+        "deps_required_for_pipeline = ['tensorflow>=2.12.0', 'tensorflow-hub>=0.10.0', 'Pillow>=9.0.0']\n",
+        "requirements_file_path = './requirements.txt'\n",
+        "# Write the dependencies to the requirements file.\n",
+        "with open(requirements_file_path, 'w') as f:\n",
+        "  for dep in deps_required_for_pipeline:\n",
+        "    f.write(dep + '\\n')\n",
+        "\n",
+        "# Install the pipeline dependencies on Dataflow.\n",
+        "options.view_as(SetupOptions).requirements_file = requirements_file_path"
+      ],
+      "metadata": {
+        "id": "lEy4PkluWbdm"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## TensorFlow ModelHandler\n",
+        " This example uses `TFModelHandlerTensor` as the model handler and the `resnet_101` model trained on imagenet.\n",
+        "\n",
+        " Download the model from [Google Cloud Storage](https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet101_weights_tf_dim_ordering_tf_kernels.h5) (link downloads the model), and place it in the directory that you want to use to update your model."
+      ],
+      "metadata": {
+        "id": "_AUNH_GJk_NE"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "model_handler = TFModelHandlerTensor(\n",
+        "    model_uri=\"gs://BUCKET_NAME/resnet101_weights_tf_dim_ordering_tf_kernels.h5\")"
+      ],
+      "metadata": {
+        "id": "kkSnsxwUk-Sp"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Pre-process images\n",
+        "\n",
+        "Use `preprocess_image` to run the inference, read the image, and convert the image to a TensorFlow tensor."
+      ],
+      "metadata": {
+        "id": "tZH0r0sL-if5"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def preprocess_image(image_name, image_dir):\n",
+        "  img = tf.keras.utils.get_file(image_name, image_dir + image_name)\n",
+        "  img = Image.open(img).resize((224, 224))\n",
+        "  img = numpy.array(img) / 255.0\n",
+        "  img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)\n",
+        "  return img_tensor"
+      ],
+      "metadata": {
+        "id": "dU5imgTt-8Ne"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "class PostProcessor(beam.DoFn):\n",
+        "  \"\"\"Process the PredictionResult to get the predicted label.\n",
+        "  Returns predicted label.\n",
+        "  \"\"\"\n",
+        "  def process(self, element: PredictionResult) -> Iterable[Tuple[str, str]]:\n",
+        "    predicted_class = numpy.argmax(element.inference, axis=-1)\n",
+        "    labels_path = tf.keras.utils.get_file(\n",
+        "        'ImageNetLabels.txt',\n",
+        "        'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'  # pylint: disable=line-too-long\n",
+        "    )\n",
+        "    imagenet_labels = numpy.array(open(labels_path).read().splitlines())\n",
+        "    predicted_class_name = imagenet_labels[predicted_class]\n",
+        "    yield predicted_class_name.title(), element.model_id"
+      ],
+      "metadata": {
+        "id": "6V5tJxO6-gyt"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Define the pipeline object.\n",
+        "pipeline = beam.Pipeline(options=options)"
+      ],
+      "metadata": {
+        "id": "GpdKk72O_NXT"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Next, review the pipeline steps and examine the code.\n",
+        "\n",
+        "### Pipeline steps\n"
+      ],
+      "metadata": {
+        "id": "elZ53uxc_9Hv"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "1. Create a `PeriodicImpulse` transform, which emits output every `n` seconds. The `PeriodicImpulse` transform generates an infinite sequence of elements with a given runtime interval.\n",
+        "\n",
+        "  In this example, `PeriodicImpulse` mimics the Pub/Sub source. Because the inputs in a streaming pipeline arrive in intervals, use `PeriodicImpulse` to output elements at `m` intervals.\n",
+        "\n",
+        "To learn more about `PeriodicImpulse`, see the [`PeriodicImpulse` code](https://github.com/apache/beam/blob/9c52e0594d6f0e59cd17ee005acfb41da508e0d5/sdks/python/apache_beam/transforms/periodicsequence.py#L150)."
+      ],
+      "metadata": {
+        "id": "305tkV2sAD-S"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "start_timestamp = time.time() # start timestamp of the periodic impulse\n",
+        "end_timestamp = start_timestamp + 60 * 20 # end timestamp of the periodic impulse.\n",
+        "main_input_fire_interval = 60 # interval at which the main input PCollection is emitted.\n",
+        "side_input_fire_interval = 60 # interval at which the side input PCollection is emitted.\n",
+        "\n",
+        "periodic_impulse = (\n",
+        "      pipeline\n",
+        "      | \"MainInputPcoll\" >> PeriodicImpulse(\n",
+        "          start_timestamp=start_timestamp,\n",
+        "          stop_timestamp=end_timestamp,\n",
+        "          fire_interval=main_input_fire_interval)"
+      ],
+      "metadata": {
+        "id": "vUFStz66_Tbb"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "2. To read and pre-process the images, use the `read_image` function. This example uses `Cat-with-beanie.jpg` for all inferences."

Review Comment:
   Please add a link to the picture (can just be in the bucket)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] AnandInguva commented on pull request #26048: Auto model updates notebook

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #26048:
URL: https://github.com/apache/beam/pull/26048#issuecomment-1499374796

   @damccorm PTAL


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org