Posted to github@beam.apache.org by "damccorm (via GitHub)" <gi...@apache.org> on 2023/08/31 14:55:54 UTC

[GitHub] [beam] damccorm opened a new pull request, #28263: Add support for limiting number of models in memory

damccorm opened a new pull request, #28263:
URL: https://github.com/apache/beam/pull/28263

   This will allow users to set an upper bound on the number of models in memory to help avoid OOMs. See https://docs.google.com/document/d/1kj3FyWRbJu1KhViX07Z0Gk0MU0842jhYRhI-DMhhcv4/edit#heading=h.iaih85l3wghw for more detail
   
   Part of https://github.com/apache/beam/issues/27628
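   
   For illustration, here is a minimal, hypothetical usage sketch. The placement of `max_models_per_worker_hint` on `KeyedModelHandler`, the `KeyModelMapping` wrapper, and the handler/model paths below are assumptions inferred from this PR's diff, not a confirmed public API:
   
   ```python
   # Hypothetical sketch only; parameter placement, KeyModelMapping usage, and
   # model paths are assumptions based on this PR, not a confirmed API.
   import apache_beam as beam
   import numpy as np
   from apache_beam.ml.inference.base import (
       KeyedModelHandler, KeyModelMapping, RunInference)
   from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy
   
   keyed_handler = KeyedModelHandler(
       [
           KeyModelMapping(
               ['en'], SklearnModelHandlerNumpy(model_uri='gs://bucket/en.pkl')),
           KeyModelMapping(
               ['fr'], SklearnModelHandlerNumpy(model_uri='gs://bucket/fr.pkl')),
       ],
       # Keep at most one of the per-key models loaded per worker process;
       # the least recently used model is evicted when the limit is reached.
       max_models_per_worker_hint=1)
   
   with beam.Pipeline() as p:
     _ = (
         p
         | beam.Create([('en', np.array([1.0, 2.0])),
                        ('fr', np.array([3.0, 4.0]))])
         | RunInference(keyed_handler))
   ```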
   
   ------------------------
   
   Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
   
    - [ ] Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
    - [ ] Update `CHANGES.md` with noteworthy changes.
    - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   See the [Contributor Guide](https://beam.apache.org/contribute) for more tips on [how to make review process smoother](https://github.com/apache/beam/blob/master/CONTRIBUTING.md#make-the-reviewers-job-easier).
   
   To check the build health, please visit [https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md](https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md)
   
   GitHub Actions Tests Status (on master branch)
   ------------------------------------------------------------------------------------------------
   [![Build python source distribution and wheels](https://github.com/apache/beam/workflows/Build%20python%20source%20distribution%20and%20wheels/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Build+python+source+distribution+and+wheels%22+branch%3Amaster+event%3Aschedule)
   [![Python tests](https://github.com/apache/beam/workflows/Python%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Python+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Java tests](https://github.com/apache/beam/workflows/Java%20Tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Java+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Go tests](https://github.com/apache/beam/workflows/Go%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Go+tests%22+branch%3Amaster+event%3Aschedule)
   
   See [CI.md](https://github.com/apache/beam/blob/master/CI.md) for more information about GitHub Actions CI or the [workflows README](https://github.com/apache/beam/blob/master/.github/workflows/README.md) to see a list of phrases to trigger workflows.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] github-actions[bot] commented on pull request #28263: Add support for limiting number of models in memory

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #28263:
URL: https://github.com/apache/beam/pull/28263#issuecomment-1701420346

   Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control




[GitHub] [beam] damccorm commented on pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on PR #28263:
URL: https://github.com/apache/beam/pull/28263#issuecomment-1701418351

   R: @riteshghorse 




[GitHub] [beam] damccorm commented on pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on PR #28263:
URL: https://github.com/apache/beam/pull/28263#issuecomment-1701416984

   Looks like the error in the last precommit was due to low memory. It should be solved by moving to GHA (in general, GHA has been less flaky in my recent experience).




[GitHub] [beam] damccorm commented on a diff in pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #28263:
URL: https://github.com/apache/beam/pull/28263#discussion_r1311974702


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -308,17 +308,13 @@ class _ModelManager:
   parameter, if that is set it will only hold that many models in memory at
   once before evicting one (using LRU logic).

Review Comment:
   Done, thanks!
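   
   As an aside, the "LRU logic" referenced in the docstring above can be illustrated with a small standalone sketch; this is not the actual `_ModelManager` implementation, just the general idea:
   
   ```python
   # Illustrative LRU eviction sketch, not the actual _ModelManager code.
   from collections import OrderedDict
   
   class LRUModelCache:
     """Holds at most `max_models` loaded models, evicting the least recently used."""
     def __init__(self, max_models):
       self._max_models = max_models
       self._models = OrderedDict()  # key -> model, ordered oldest -> newest use
   
     def get(self, key, load_fn):
       if key in self._models:
         self._models.move_to_end(key)  # mark as most recently used
         return self._models[key]
       if len(self._models) >= self._max_models:
         self._models.popitem(last=False)  # evict the least recently used model
       self._models[key] = load_fn()
       return self._models[key]
   ```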





[GitHub] [beam] codecov[bot] commented on pull request #28263: Add support for limiting number of models in memory

Posted by "codecov[bot] (via GitHub)" <gi...@apache.org>.
codecov[bot] commented on PR #28263:
URL: https://github.com/apache/beam/pull/28263#issuecomment-1701454463

   ## [Codecov](https://app.codecov.io/gh/apache/beam/pull/28263?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) Report
   > Merging [#28263](https://app.codecov.io/gh/apache/beam/pull/28263?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) (d561cbf) into [master](https://app.codecov.io/gh/apache/beam/commit/205083dd72fd53663d865c0fb5752cc3dfdf428a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) (205083d) will **increase** coverage by `0.08%`.
   > Report is 13 commits behind head on master.
   > The diff coverage is `100.00%`.
   
   ```diff
   @@            Coverage Diff             @@
   ##           master   #28263      +/-   ##
   ==========================================
   + Coverage   72.28%   72.37%   +0.08%     
   ==========================================
     Files         678      679       +1     
     Lines       99899   100160     +261     
   ==========================================
   + Hits        72215    72493     +278     
   + Misses      26117    26100      -17     
     Partials     1567     1567              
   ```
   
   | [Flag](https://app.codecov.io/gh/apache/beam/pull/28263/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | Coverage Δ | |
   |---|---|---|
   | [python](https://app.codecov.io/gh/apache/beam/pull/28263/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `82.94% <100.00%> (+0.09%)` | :arrow_up: |
   
   Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache#carryforward-flags-in-the-pull-request-comment) to find out more.
   
   | [Files Changed](https://app.codecov.io/gh/apache/beam/pull/28263?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | Coverage Δ | |
   |---|---|---|
   | [sdks/python/apache\_beam/ml/inference/base.py](https://app.codecov.io/gh/apache/beam/pull/28263?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvaW5mZXJlbmNlL2Jhc2UucHk=) | `93.51% <100.00%> (+0.15%)` | :arrow_up: |
   
   ... and [13 files with indirect coverage changes](https://app.codecov.io/gh/apache/beam/pull/28263/indirect-changes?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache)
   
   :mega: We’re building smart automated test selection to slash your CI/CD build times. [Learn more](https://about.codecov.io/iterative-testing/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache)
   




[GitHub] [beam] riteshghorse commented on a diff in pull request #28263: Add support for limiting number of models in memory

Posted by "riteshghorse (via GitHub)" <gi...@apache.org>.
riteshghorse commented on code in PR #28263:
URL: https://github.com/apache/beam/pull/28263#discussion_r1311980103


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -587,6 +588,14 @@ def run_inference(
           keys,
           self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
 
+    # The first time a MultiProcessShared ModelManager is used for inference
+    # from this process, we should increment its max model count
+    if self._max_models_per_worker_hint is not None:
+      lock = threading.Lock()
+      if lock.acquire(blocking=False):

Review Comment:
   got it, thanks!





[GitHub] [beam] damccorm commented on pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on PR #28263:
URL: https://github.com/apache/beam/pull/28263#issuecomment-1701523451

   Run Python PreCommit




[GitHub] [beam] riteshghorse commented on a diff in pull request #28263: Add support for limiting number of models in memory

Posted by "riteshghorse (via GitHub)" <gi...@apache.org>.
riteshghorse commented on code in PR #28263:
URL: https://github.com/apache/beam/pull/28263#discussion_r1311970930


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -308,17 +308,13 @@ class _ModelManager:
   parameter, if that is set it will only hold that many models in memory at
   once before evicting one (using LRU logic).

Review Comment:
   remove doc statement for `max_models`



##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -587,6 +588,14 @@ def run_inference(
           keys,
           self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
 
+    # The first time a MultiProcessShared ModelManager is used for inference
+    # from this process, we should increment its max model count
+    if self._max_models_per_worker_hint is not None:
+      lock = threading.Lock()
+      if lock.acquire(blocking=False):

Review Comment:
   curious why are we not using a blocking lock?





[GitHub] [beam] damccorm commented on pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on PR #28263:
URL: https://github.com/apache/beam/pull/28263#issuecomment-1701415983

   Run Python_Coverage PreCommit




[GitHub] [beam] damccorm commented on a diff in pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #28263:
URL: https://github.com/apache/beam/pull/28263#discussion_r1311977323


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -587,6 +588,14 @@ def run_inference(
           keys,
           self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
 
+    # The first time a MultiProcessShared ModelManager is used for inference
+    # from this process, we should increment its max model count
+    if self._max_models_per_worker_hint is not None:
+      lock = threading.Lock()
+      if lock.acquire(blocking=False):

Review Comment:
   The goal is to run this exactly once per process. So the idea is that in a given process, a single thread will acquire the lock and increment the max models. Then the remaining threads (or the original thread in the future) will try to acquire the lock and fail, so they will not increment the value.
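   
   To illustrate that pattern, here is a hypothetical standalone sketch (placeholder names, not this PR's actual code): a process-level lock acquired non-blockingly, and deliberately never released, lets exactly one thread per process perform the increment.
   
   ```python
   # Hypothetical sketch of the once-per-process pattern described above;
   # names are placeholders, not the PR's actual code.
   import threading
   
   _hint_lock = threading.Lock()  # module-level, so one lock per process
   
   def maybe_increment_max_models(model_manager, hint):
     # Only the first caller in this process acquires the lock. It is never
     # released, so every later non-blocking acquire fails and the shared
     # manager's model budget grows by `hint` exactly once per process.
     if hint is not None and _hint_lock.acquire(blocking=False):
       model_manager.increment_max_models(hint)  # placeholder method name
   ```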





[GitHub] [beam] damccorm merged pull request #28263: Add support for limiting number of models in memory

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm merged PR #28263:
URL: https://github.com/apache/beam/pull/28263

