Posted to github@beam.apache.org by "AnandInguva (via GitHub)" <gi...@apache.org> on 2023/02/06 04:12:50 UTC

[GitHub] [beam] AnandInguva opened a new pull request, #25321: Add support for loading torchscript models

AnandInguva opened a new pull request, #25321:
URL: https://github.com/apache/beam/pull/25321

   **Please** add a meaningful description for your change here
   
   ------------------------
   
   Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
   
    - [ ] Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
    - [ ] Update `CHANGES.md` with noteworthy changes.
    - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   See the [Contributor Guide](https://beam.apache.org/contribute) for more tips on [how to make review process smoother](https://beam.apache.org/contribute/get-started-contributing/#make-the-reviewers-job-easier).
   
   To check the build health, please visit [https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md](https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md)
   
   GitHub Actions Tests Status (on master branch)
   ------------------------------------------------------------------------------------------------
   [![Build python source distribution and wheels](https://github.com/apache/beam/workflows/Build%20python%20source%20distribution%20and%20wheels/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Build+python+source+distribution+and+wheels%22+branch%3Amaster+event%3Aschedule)
   [![Python tests](https://github.com/apache/beam/workflows/Python%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Python+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Java tests](https://github.com/apache/beam/workflows/Java%20Tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Java+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Go tests](https://github.com/apache/beam/workflows/Go%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Go+tests%22+branch%3Amaster+event%3Aschedule)
   
   See [CI.md](https://github.com/apache/beam/blob/master/CI.md) for more information about GitHub Actions CI.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] codecov[bot] commented on pull request #25321: Add support for loading torchscript models

Posted by "codecov[bot] (via GitHub)" <gi...@apache.org>.
codecov[bot] commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1418499202

   # [Codecov](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#25321](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (caddaca) into [master](https://codecov.io/gh/apache/beam/commit/16cb63be7e09230b30f582844549328feaa70168?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (16cb63b) will **decrease** coverage by `0.02%`.
   > The diff coverage is `0.00%`.
   
   ```diff
   @@            Coverage Diff             @@
   ##           master   #25321      +/-   ##
   ==========================================
   - Coverage   72.95%   72.93%   -0.02%     
   ==========================================
     Files         745      745              
     Lines       99174    99188      +14     
   ==========================================
   - Hits        72356    72347       -9     
   - Misses      25453    25476      +23     
     Partials     1365     1365              
   ```
   
   | Flag | Coverage Δ | |
   |---|---|---|
   | python | `82.43% <0.00%> (-0.03%)` | :arrow_down: |
   
   Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
   
   | [Impacted Files](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...thon/apache\_beam/ml/inference/pytorch\_inference.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvaW5mZXJlbmNlL3B5dG9yY2hfaW5mZXJlbmNlLnB5) | `0.00% <0.00%> (ø)` | |
   | [sdks/python/apache\_beam/io/localfilesystem.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vaW8vbG9jYWxmaWxlc3lzdGVtLnB5) | `90.97% <0.00%> (-0.76%)` | :arrow_down: |
   | [...eam/runners/portability/fn\_api\_runner/execution.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy9wb3J0YWJpbGl0eS9mbl9hcGlfcnVubmVyL2V4ZWN1dGlvbi5weQ==) | `92.49% <0.00%> (-0.64%)` | :arrow_down: |
   | [...ython/apache\_beam/io/gcp/bigquery\_read\_internal.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vaW8vZ2NwL2JpZ3F1ZXJ5X3JlYWRfaW50ZXJuYWwucHk=) | `57.21% <0.00%> (-0.49%)` | :arrow_down: |
   | [sdks/python/apache\_beam/io/gcp/bigquery.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vaW8vZ2NwL2JpZ3F1ZXJ5LnB5) | `71.11% <0.00%> (-0.24%)` | :arrow_down: |
   | [...hon/apache\_beam/runners/worker/bundle\_processor.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy93b3JrZXIvYnVuZGxlX3Byb2Nlc3Nvci5weQ==) | `93.42% <0.00%> (-0.13%)` | :arrow_down: |
   | [...ks/python/apache\_beam/runners/worker/sdk\_worker.py](https://codecov.io/gh/apache/beam/pull/25321?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy93b3JrZXIvc2RrX3dvcmtlci5weQ==) | `89.24% <0.00%> (ø)` | |
   
   :mega: We’re building smart automated test selection to slash your CI/CD build times. [Learn more](https://about.codecov.io/iterative-testing/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   




[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1102921175


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -81,11 +81,8 @@ def _validate_constructor_args(
 
 
 def _load_model(
-    model_class: Optional[Callable[..., torch.nn.Module]],
-    state_dict_path: Optional[str],
-    device: torch.device,
-    model_params: Optional[Dict[str, Any]],
-    torch_script_model_path: Optional[str]):
+    model_class, state_dict_path, device, model_params,
+    torch_script_model_path):

Review Comment:
   Since `model_params` is `Optional[Dict[str, Any]]` in the model handler constructor, we check there whether it is `None` and replace `None` with `{}`. We then pass `model_params`, which at that point is always a `Dict`, to `_load_model`.
   
   Here:
   1. In `_load_model`, we would still need to annotate the parameter as `Optional[Dict[str, Any]]` instead of `Dict[str, Any]` (even though we know it is always a `dict`), because `mypy` cannot deduce that on its own.
   2. If I annotate it as `Optional[Dict[str, Any]]`, `mypy` thinks it could sometimes be `None`, and the type checker fails at `**model_params`, saying `after **, a Map is expected`. We know it is always a mapping, but the type of `_load_model`'s `model_params` has to match the upstream `model_params` type.
   
   Either way, we get an error, and it is not really a bug on our side. So I removed the type hints: this is an internal function, and the upstream methods that call `_load_model` already provide type hints.
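
   A minimal, self-contained sketch (illustrative names, not code from the PR) of the `mypy` conflict described above: the `Optional` annotation on the inner function triggers the `**` error even though every caller normalizes `None` away first.
   
   ```python
   from typing import Any, Dict, Optional
   
   
   def _inner(params: Optional[Dict[str, Any]]) -> Dict[str, Any]:
     # mypy flags the next line ("Argument after ** must be a mapping"),
     # because the annotation still admits None.
     return dict(**params)
   
   
   def handler(params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
     # The constructor normalizes None to {} before the call, so params is
     # never None here, but mypy cannot propagate that fact through _inner's
     # Optional annotation. Dropping the annotation (as the PR does) or adding
     # a targeted "# type: ignore" both silence the error.
     return _inner(params if params is not None else {})
   ```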





[GitHub] [beam] AnandInguva commented on pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1425884689

   @damccorm The tests pass, but for some reason they are not updated in the UI. PTAL




[GitHub] [beam] damccorm commented on pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1426317161

   LGTM pending checks passing




[GitHub] [beam] damccorm merged pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm merged PR #25321:
URL: https://github.com/apache/beam/pull/25321




[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1102973681


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -81,11 +81,8 @@ def _validate_constructor_args(
 
 
 def _load_model(
-    model_class: Optional[Callable[..., torch.nn.Module]],
-    state_dict_path: Optional[str],
-    device: torch.device,
-    model_params: Optional[Dict[str, Any]],
-    torch_script_model_path: Optional[str]):
+    model_class, state_dict_path, device, model_params,
+    torch_script_model_path):

Review Comment:
   Alternatively, I can keep all of the type hints and add a `mypy` ignore on the error line, with a comment explaining why.
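
   A short sketch of that alternative (illustrative names, not the PR's code): keep the full annotation and suppress the single false positive.
   
   ```python
   from typing import Any, Dict, Optional
   
   
   def _inner(params: Optional[Dict[str, Any]]) -> Dict[str, Any]:
     # Callers always normalize None to {} before calling this, so the
     # unpacking below is a known mypy false positive.
     return dict(**params)  # type: ignore
   ```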





[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1103192572


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -174,6 +174,12 @@ def update_model_path(self, model_path: Optional[str] = None):
     """Update the model paths produced by side inputs."""
     pass
 
+  def validate_constructor_args(self):

Review Comment:
   SGTM. I will make the changes.





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1098766011


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -174,6 +186,9 @@ def __init__(
         Otherwise, it will be CPU.
       inference_fn: the inference function to use during RunInference.
         default=_default_tensor_inference_fn
+      use_torch_script_format: When `use_torch_script_format` is set to `True`,
+        the model will be loaded using `torch.jit.load()`.
+        `model_class` and `model_params` arguments will be disregarded.

Review Comment:
   > I think `use_torch_script_format` is the wrong name. I can rename it to something like `load_as_torchscript_model`; any suggestions?
   
   Maybe just `is_torchscript`? An alternate approach would be to take a `torchscript_model_path` parameter. So instead of a bool, it would take a string, and we would require `state_dict_path` and `model_class` to be `None`. I might lean towards that approach.
   
   > When the user enables this, we call `torch.jit.load()`, which accepts `.pt`, `.pth`, and also the new `zip` format that torch is going to adopt as the default soon: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-for-inference
   
   This still needs to be saved in the TorchScript format, though, right? It would not work with a `.pt`/`.pth` file that just contains the `state_dict`.
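
   For context, a minimal PyTorch round trip (standard `torch` APIs; file names are illustrative, and this is not code from the PR) showing why the two save formats are not interchangeable:
   
   ```python
   import torch
   
   
   class TinyModel(torch.nn.Module):
     def __init__(self):
       super().__init__()
       self.linear = torch.nn.Linear(2, 1)
   
     def forward(self, x):
       return self.linear(x)
   
   
   model = TinyModel()
   
   # Route 1: state_dict. Loading requires the model class to rebuild the
   # module before calling load_state_dict().
   torch.save(model.state_dict(), 'state_dict.pth')
   restored = TinyModel()
   restored.load_state_dict(torch.load('state_dict.pth'))
   
   # Route 2: TorchScript. The scripted module is self-describing, so no
   # model class is needed at load time; this is what torch.jit.load() expects.
   torch.jit.save(torch.jit.script(model), 'torch_script_model.pt')
   scripted = torch.jit.load('torch_script_model.pt')
   
   # torch.jit.load('state_dict.pth') would fail: that file is a pickled
   # dict of tensors, not a serialized ScriptModule.
   ```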





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1103166098


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -174,6 +174,12 @@ def update_model_path(self, model_path: Optional[str] = None):
     """Update the model paths produced by side inputs."""
     pass
 
+  def validate_constructor_args(self):

Review Comment:
   Ah, I just meant sharing it between the PyTorch model handlers. Sorry, that was unclear. I think a single shared `_validate_constructor_args` function is enough here. Thanks for bearing with me.





[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1099236711


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -174,6 +186,9 @@ def __init__(
         Otherwise, it will be CPU.
       inference_fn: the inference function to use during RunInference.
         default=_default_tensor_inference_fn
+      use_torch_script_format: When `use_torch_script_format` is set to `True`,
+        the model will be loaded using `torch.jit.load()`.
+        `model_class` and `model_params` arguments will be disregarded.

Review Comment:
   I like the `torchscript_model_path` approach. I will change the code. Thanks!
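
   For reference, a hedged usage sketch of the parameter the PR eventually settled on (`torch_script_model_path`, per the merged diff); the bucket path and inputs here are made up for illustration:
   
   ```python
   import torch
   
   import apache_beam as beam
   from apache_beam.ml.inference.base import RunInference
   from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
   
   # TorchScript route: the scripted module is self-describing, so no
   # model_class or model_params are supplied.
   model_handler = PytorchModelHandlerTensor(
       torch_script_model_path='gs://my-bucket/torch_script_model.pt')
   
   with beam.Pipeline() as pipeline:
     _ = (
         pipeline
         | beam.Create([torch.tensor([1.0, 2.0])])
         | RunInference(model_handler)
         | beam.Map(print))
   ```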





[GitHub] [beam] AnandInguva commented on pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1424747883

   > LGTM, I'll merge once checks pass
   
   I think there are some lint issues (it's not clear why they appear). I will tackle them tomorrow and ping you.




[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1102981057


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -81,11 +81,8 @@ def _validate_constructor_args(
 
 
 def _load_model(
-    model_class: Optional[Callable[..., torch.nn.Module]],
-    state_dict_path: Optional[str],
-    device: torch.device,
-    model_params: Optional[Dict[str, Any]],
-    torch_script_model_path: Optional[str]):
+    model_class, state_dict_path, device, model_params,
+    torch_script_model_path):

Review Comment:
   Yeah, that's probably actually a better path forward. Thanks!





[GitHub] [beam] damccorm commented on pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1426316940

   Run Whitespace PreCommit




[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1097882174


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -174,6 +186,9 @@ def __init__(
         Otherwise, it will be CPU.
       inference_fn: the inference function to use during RunInference.
         default=_default_tensor_inference_fn
+      use_torch_script_format: When `use_torch_script_format` is set to `True`,
+        the model will be loaded using `torch.jit.load()`.
+        `model_class` and `model_params` arguments will be disregarded.

Review Comment:
   I will update the docstring to mention `torch.jit.load(state_dict_path)`.



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -174,6 +186,9 @@ def __init__(
         Otherwise, it will be CPU.
       inference_fn: the inference function to use during RunInference.
         default=_default_tensor_inference_fn
+      use_torch_script_format: When `use_torch_script_format` is set to `True`,
+        the model will be loaded using `torch.jit.load()`.
+        `model_class` and `model_params` arguments will be disregarded.

Review Comment:
   I think `use_torch_script_format` is the wrong name. I can rename it to something like `load_as_torchscript_model`; any suggestions?
   
   When the user enables this, we call `torch.jit.load()`, which accepts `.pt`, `.pth`, and also the new `zip` format that torch is going to adopt as the default soon: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-for-inference



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -71,18 +73,26 @@ def _load_model(
   try:
     logging.info(
         "Loading state_dict_path %s onto a %s device", state_dict_path, device)
-    state_dict = torch.load(file, map_location=device)
+    if not use_torch_script_format:

Review Comment:
   > My inclination after thinking more is that we should not do this since (a) it gives the users less safety from accidental errors, and (b) it is potentially constraining if we accept loading new model types that don't use state_dict_path or model_class in the future.
   
   Is this about the suggestion to accept `None` for `model_class` and infer the behavior from that, or about adding the `use_torch_script_format` parameter?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -609,6 +609,57 @@ def test_gpu_auto_convert_to_cpu(self):
           "are not available. Switching to CPU.",
           log.output)
 
+  def test_load_torch_script_model(self):
+    torch_model = PytorchLinearRegression(2, 1)
+    torch_script_model = torch.jit.script(torch_model)
+
+    torch_script_path = os.path.join(self.tmpdir, 'torch_script_model.pt')
+
+    torch.jit.save(torch_script_model, torch_script_path)

Review Comment:
   These are pretty light models and take ~3 seconds to run.
   
   Also, the disadvantage of saving the models in a GCS bucket for unit tests is that we would need to update them manually whenever a future PyTorch version becomes incompatible anyway.





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1101716682


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -201,20 +207,30 @@ def __init__(
       logging.info("Device is set to CPU")
       self._device = torch.device('cpu')
     self._model_class = model_class
-    self._model_params = model_params
+    self._model_params = model_params if model_params else {}
     self._inference_fn = inference_fn
-    self._use_torch_script_format = use_torch_script_format
+    self._batching_kwargs = {}
+    if min_batch_size is not None:
+      self._batching_kwargs['min_batch_size'] = min_batch_size
+    if max_batch_size is not None:
+      self._batching_kwargs['max_batch_size'] = max_batch_size
+    self._torch_script_model_path = torch_script_model_path
 
-    self._validate_func_args()
+    self.validate_constructor_args()
 
-  def _validate_func_args(self):
-    if not self._use_torch_script_format and (self._model_class is None or
-                                              self._model_params is None):
+  def validate_constructor_args(self):
+    if self._state_dict_path and not self._model_class:

Review Comment:
   ```suggestion
       if bool(self._state_dict_path) != bool(self._model_class):
   ```
   
   or alternatively:
   
   ```suggestion
       if (self._state_dict_path and not self._model_class) or (not self._state_dict_path and self._model_class):
   ```
   
   We just need to also catch the case where they provide the `model_class` but not the `state_dict_path`.
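
   A tiny sketch (illustrative values, not PR code) of how `bool(a) != bool(b)` behaves as an exclusive-or across the four truthiness cases:
   
   ```python
   def mismatched(state_dict_path, model_class):
     # True exactly when one argument is provided without the other.
     return bool(state_dict_path) != bool(model_class)
   
   
   assert mismatched('model.pth', None)                   # path without class
   assert mismatched(None, 'SomeModelClass')              # class without path
   assert not mismatched(None, None)                      # neither: TorchScript route
   assert not mismatched('model.pth', 'SomeModelClass')   # both: state_dict route
   ```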



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -400,11 +430,30 @@ def __init__(
       logging.info("Device is set to CPU")
       self._device = torch.device('cpu')
     self._model_class = model_class
-    self._model_params = model_params
+    self._model_params = model_params if model_params else {}
     self._inference_fn = inference_fn
-    self._use_torch_script_format = use_torch_script_format
+    self._batching_kwargs = {}
+    if min_batch_size is not None:
+      self._batching_kwargs['min_batch_size'] = min_batch_size
+    if max_batch_size is not None:
+      self._batching_kwargs['max_batch_size'] = max_batch_size
+    self._torch_script_model_path = torch_script_model_path
+
+    self.validate_constructor_args()
+
+  def validate_constructor_args(self):

Review Comment:
   Can we share this function as `_validate_constructor_args` (like `_load_model`)?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -201,20 +207,30 @@ def __init__(
       logging.info("Device is set to CPU")
       self._device = torch.device('cpu')
     self._model_class = model_class
-    self._model_params = model_params
+    self._model_params = model_params if model_params else {}
     self._inference_fn = inference_fn
-    self._use_torch_script_format = use_torch_script_format
+    self._batching_kwargs = {}
+    if min_batch_size is not None:
+      self._batching_kwargs['min_batch_size'] = min_batch_size
+    if max_batch_size is not None:
+      self._batching_kwargs['max_batch_size'] = max_batch_size
+    self._torch_script_model_path = torch_script_model_path
 
-    self._validate_func_args()
+    self.validate_constructor_args()
 
-  def _validate_func_args(self):
-    if not self._use_torch_script_format and (self._model_class is None or
-                                              self._model_params is None):
+  def validate_constructor_args(self):
+    if self._state_dict_path and not self._model_class:
       raise RuntimeError(
-          "Please pass both `model_class` and `model_params` to the torch "
-          "model handler when using it with PyTorch. "
-          "If you opt to load the entire that was saved using TorchScript, "
-          "set `use_torch_script_format` to True.")
+          "A state_dict_path has been supplied to the model "
+          "handler, but the required model_class is missing. "
+          "Please provide the model_class in order to "
+          "successfully load the state_dict_path.")
+
+    if self._torch_script_model_path:
+      if self._state_dict_path and self._model_class:

Review Comment:
   ```suggestion
         if self._state_dict_path or self._model_class:
   ```



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -400,11 +430,30 @@ def __init__(
       logging.info("Device is set to CPU")
       self._device = torch.device('cpu')
     self._model_class = model_class
-    self._model_params = model_params
+    self._model_params = model_params if model_params else {}
     self._inference_fn = inference_fn
-    self._use_torch_script_format = use_torch_script_format
+    self._batching_kwargs = {}
+    if min_batch_size is not None:
+      self._batching_kwargs['min_batch_size'] = min_batch_size
+    if max_batch_size is not None:
+      self._batching_kwargs['max_batch_size'] = max_batch_size
+    self._torch_script_model_path = torch_script_model_path
+
+    self.validate_constructor_args()
+
+  def validate_constructor_args(self):

Review Comment:
   If you're planning on doing something like that in the follow-up refactor PR you mentioned, that is fine as well.



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -201,20 +207,30 @@ def __init__(
       logging.info("Device is set to CPU")
       self._device = torch.device('cpu')
     self._model_class = model_class
-    self._model_params = model_params
+    self._model_params = model_params if model_params else {}
     self._inference_fn = inference_fn
-    self._use_torch_script_format = use_torch_script_format
+    self._batching_kwargs = {}
+    if min_batch_size is not None:
+      self._batching_kwargs['min_batch_size'] = min_batch_size
+    if max_batch_size is not None:
+      self._batching_kwargs['max_batch_size'] = max_batch_size
+    self._torch_script_model_path = torch_script_model_path
 
-    self._validate_func_args()
+    self.validate_constructor_args()
 
-  def _validate_func_args(self):
-    if not self._use_torch_script_format and (self._model_class is None or
-                                              self._model_params is None):
+  def validate_constructor_args(self):
+    if self._state_dict_path and not self._model_class:
       raise RuntimeError(
-          "Please pass both `model_class` and `model_params` to the torch "
-          "model handler when using it with PyTorch. "
-          "If you opt to load the entire that was saved using TorchScript, "
-          "set `use_torch_script_format` to True.")
+          "A state_dict_path has been supplied to the model "
+          "handler, but the required model_class is missing. "
+          "Please provide the model_class in order to "
+          "successfully load the state_dict_path.")
+
+    if self._torch_script_model_path:
+      if self._state_dict_path and self._model_class:

Review Comment:
   (shouldn't really matter if we check the above case)





[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1101906346


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -400,11 +430,30 @@ def __init__(
       logging.info("Device is set to CPU")
       self._device = torch.device('cpu')
     self._model_class = model_class
-    self._model_params = model_params
+    self._model_params = model_params if model_params else {}
     self._inference_fn = inference_fn
-    self._use_torch_script_format = use_torch_script_format
+    self._batching_kwargs = {}
+    if min_batch_size is not None:
+      self._batching_kwargs['min_batch_size'] = min_batch_size
+    if max_batch_size is not None:
+      self._batching_kwargs['max_batch_size'] = max_batch_size
+    self._torch_script_model_path = torch_script_model_path
+
+    self.validate_constructor_args()
+
+  def validate_constructor_args(self):

Review Comment:
   Yes, I will take care of refactoring in the next PR.





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1102963348


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -81,11 +81,8 @@ def _validate_constructor_args(
 
 
 def _load_model(
-    model_class: Optional[Callable[..., torch.nn.Module]],
-    state_dict_path: Optional[str],
-    device: torch.device,
-    model_params: Optional[Dict[str, Any]],
-    torch_script_model_path: Optional[str]):
+    model_class, state_dict_path, device, model_params,
+    torch_script_model_path):

Review Comment:
   That's fine by me; can we leave the other type hints, though? (We may still need to exclude `state_dict_path` for the same reason.)
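
   A hedged sketch of what that compromise might look like (parameter names from the PR's diff; whether `state_dict_path` keeps its hint was left open above):
   
   ```python
   from typing import Callable, Optional
   
   import torch
   
   
   def _load_model(
       model_class: Optional[Callable[..., torch.nn.Module]],
       state_dict_path,  # hint omitted: may hit the same Optional issue
       device: torch.device,
       model_params,  # hint omitted: callers normalize None to {}
       torch_script_model_path: Optional[str]):
     ...
   ```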





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1097855765


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -429,3 +467,12 @@ def get_metrics_namespace(self) -> str:
 
   def validate_inference_args(self, inference_args: Optional[Dict[str, Any]]):
     pass
+
+  def _validate_func_args(self):

Review Comment:
   Can we share this validation function across model handlers?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -71,18 +73,26 @@ def _load_model(
   try:
     logging.info(
         "Loading state_dict_path %s onto a %s device", state_dict_path, device)
-    state_dict = torch.load(file, map_location=device)
+    if not use_torch_script_format:

Review Comment:
   Rather than parameterizing this, should we just accept `None` as the value for `model_class` and make our decision based on that?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -174,6 +186,9 @@ def __init__(
         Otherwise, it will be CPU.
       inference_fn: the inference function to use during RunInference.
         default=_default_tensor_inference_fn
+      use_torch_script_format: When `use_torch_script_format` is set to `True`,
+        the model will be loaded using `torch.jit.load()`.
+        `model_class` and `model_params` arguments will be disregarded.

Review Comment:
   `state_dict_path` can now hold more than just a `state_dict`, right? (It can hold the TorchScript file format.) If so, could you update that example usage parameter?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -609,6 +609,57 @@ def test_gpu_auto_convert_to_cpu(self):
           "are not available. Switching to CPU.",
           log.output)
 
+  def test_load_torch_script_model(self):
+    torch_model = PytorchLinearRegression(2, 1)
+    torch_script_model = torch.jit.script(torch_model)
+
+    torch_script_path = os.path.join(self.tmpdir, 'torch_script_model.pt')
+
+    torch.jit.save(torch_script_model, torch_script_path)

Review Comment:
   Does this work for remote models? If so, can we just use a remote model in GCS rather than saving a model for each test?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -71,18 +73,26 @@ def _load_model(
   try:
     logging.info(
         "Loading state_dict_path %s onto a %s device", state_dict_path, device)
-    state_dict = torch.load(file, map_location=device)
+    if not use_torch_script_format:

Review Comment:
   My inclination after thinking more is that we should not do this since (a) it gives the users less safety from accidental errors, and (b) it is potentially constraining if we accept loading new model types that don't use state_dict_path or model_class in the future.





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1098756334


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -609,6 +609,57 @@ def test_gpu_auto_convert_to_cpu(self):
           "are not available. Switching to CPU.",
           log.output)
 
+  def test_load_torch_script_model(self):
+    torch_model = PytorchLinearRegression(2, 1)
+    torch_script_model = torch.jit.script(torch_model)
+
+    torch_script_path = os.path.join(self.tmpdir, 'torch_script_model.pt')
+
+    torch.jit.save(torch_script_model, torch_script_path)

Review Comment:
   Ok, leaving it works for me.





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1103143757


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -174,6 +174,12 @@ def update_model_path(self, model_path: Optional[str] = None):
     """Update the model paths produced by side inputs."""
     pass
 
+  def validate_constructor_args(self):

Review Comment:
   Sorry for not seeing this earlier, but do we actually get anything out of this being part of the base class? We're never going to call it from the base, and its implementation will always be subclass-dependent.





[GitHub] [beam] AnandInguva commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1103154016


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -174,6 +174,12 @@ def update_model_path(self, model_path: Optional[str] = None):
     """Update the model paths produced by side inputs."""
     pass
 
+  def validate_constructor_args(self):

Review Comment:
   At first I kept this in the PyTorch model handler class, but there was a suggestion at https://github.com/apache/beam/pull/25321#discussion_r1097855765. I might have understood it wrong, but what I thought was:
   
   if we want to validate other handlers' constructor args, let's share this method in the base `ModelHandler`, and other subclasses can implement it if they want.





[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1098753827


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -71,18 +73,26 @@ def _load_model(
   try:
     logging.info(
         "Loading state_dict_path %s onto a %s device", state_dict_path, device)
-    state_dict = torch.load(file, map_location=device)
+    if not use_torch_script_format:

Review Comment:
   This comment is a no-op. I was considering the idea of not having a `use_torch_script_format` parameter and instead inferring the behavior from whether `model_class` was set. My second comment here was saying that I don't actually think it's a good idea.
   
   I mostly just wanted to call it out, since it was a sticking point on my first review pass.





[GitHub] [beam] github-actions[bot] commented on pull request #25321: Add support for loading torchscript models

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1419398180

   Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control




[GitHub] [beam] AnandInguva commented on pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1419395925

   R: @damccorm 




[GitHub] [beam] AnandInguva commented on pull request #25321: Add support for loading torchscript models

Posted by "AnandInguva (via GitHub)" <gi...@apache.org>.
AnandInguva commented on PR #25321:
URL: https://github.com/apache/beam/pull/25321#issuecomment-1419359356

   Run PythonLint PreCommit




[GitHub] [beam] damccorm commented on a diff in pull request #25321: Add support for loading torchscript models

Posted by "damccorm (via GitHub)" <gi...@apache.org>.
damccorm commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1102901526


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -81,11 +81,8 @@ def _validate_constructor_args(
 
 
 def _load_model(
-    model_class: Optional[Callable[..., torch.nn.Module]],
-    state_dict_path: Optional[str],
-    device: torch.device,
-    model_params: Optional[Dict[str, Any]],
-    torch_script_model_path: Optional[str]):
+    model_class, state_dict_path, device, model_params,
+    torch_script_model_path):

Review Comment:
   Why did we need to remove typings in the last commit? What is the specific linting issue?


