Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/06/10 19:56:08 UTC

[GitHub] [beam] yeandy opened a new pull request, #21806: Remove kwargs and add explicit runinference_args

yeandy opened a new pull request, #21806:
URL: https://github.com/apache/beam/pull/21806

   Removing `**kwargs` from `RunInference` and adding an explicit `extra_runinference_args` argument. Only frameworks, like PyTorch, that need the extra params will use it; other frameworks won't need to pass in these extra args.
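   
   As a rough sketch of the change (the names follow this PR's discussion, but the real Beam type parameters are simplified here, so treat this as illustrative rather than the merged signature):
   
   ```
   from typing import Any, Dict, Iterable, Optional, Sequence
   
   
   class ModelHandler:
     # Sketch only: the real Beam ModelHandler is generic over example,
     # prediction, and model types.
     # Before: arbitrary **kwargs were forwarded to the framework call:
     #   def run_inference(self, batch, model, **kwargs): ...
     # After: one explicit, optional dict of framework-specific arguments.
     def run_inference(
         self,
         batch: Sequence[Any],
         model: Any,
         extra_runinference_args: Optional[Dict[str, Any]] = None
     ) -> Iterable[Any]:
       raise NotImplementedError(type(self))
   ```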
   
   ------------------------
   
   Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
   
    - [ ] [**Choose reviewer(s)**](https://beam.apache.org/contribute/#make-your-change) and mention them in a comment (`R: @username`).
    - [ ] Mention the appropriate issue in your description (for example: "addresses #123"), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment "fixes #<ISSUE NUMBER>" instead.
    - [ ] Update `CHANGES.md` with noteworthy changes.
    - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   See the [Contributor Guide](https://beam.apache.org/contribute) for more tips on [how to make review process smoother](https://beam.apache.org/contribute/#make-reviewers-job-easier).
   
   To check the build health, please visit [https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md](https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md)
   
   GitHub Actions Tests Status (on master branch)
   ------------------------------------------------------------------------------------------------
   [![Build python source distribution and wheels](https://github.com/apache/beam/workflows/Build%20python%20source%20distribution%20and%20wheels/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Build+python+source+distribution+and+wheels%22+branch%3Amaster+event%3Aschedule)
   [![Python tests](https://github.com/apache/beam/workflows/Python%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Python+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Java tests](https://github.com/apache/beam/workflows/Java%20Tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Java+Tests%22+branch%3Amaster+event%3Aschedule)
   
   See [CI.md](https://github.com/apache/beam/blob/master/CI.md) for more information about GitHub Actions CI.
   




[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r894896889


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -115,12 +115,12 @@ def forward(self, x):
     return out
 
 
-class PytorchLinearRegressionKwargsPredictionParams(torch.nn.Module):
+class PytorchLinearRegressionKeyedBatchAndExtraParams(torch.nn.Module):
   """
-  A linear model with kwargs inputs and non-batchable input params.
+  A linear model with batched keyed inputs and non-batchable extra params.
 
-  Note: k1 and k2 are batchable inputs passed in as a kwargs.
-  prediction_param_array, prediction_param_bool are non-batchable inputs
+  Note: k1 and k2 are batchable examples passed in as a keyed to torch dict.

Review Comment:
   Yeah, that'll work





[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897091195


##########
sdks/python/apache_beam/ml/inference/sklearn_inference.py:
##########
@@ -74,8 +77,11 @@ def load_model(self) -> BaseEstimator:
     return _load_model(self._model_uri, self._model_file_type)
 
   def run_inference(
-      self, batch: Sequence[numpy.ndarray], model: BaseEstimator,
-      **kwargs) -> Iterable[PredictionResult]:
+      self,
+      batch: Sequence[numpy.ndarray],
+      model: BaseEstimator,
+      extra_kwargs: Optional[Dict[str,

Review Comment:
   > If someone puts extra args into this interface that it isn't expecting, it should fail instead of working silently.
   
   A ModelHandler that doesn't support extra_kwargs could raise an error. We don't need to rely on Python argument matching to raise the error.
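   
   A minimal sketch of that idea (the handler class and the `model.predict` call are assumptions for illustration, not this PR's actual code):
   
   ```
   from typing import Any, Dict, Iterable, Optional, Sequence
   
   
   class NoExtraArgsModelHandler:
     """Rejects extra_kwargs explicitly, so the failure does not depend on
     Python argument matching."""
     def run_inference(
         self,
         batch: Sequence[Any],
         model: Any,
         extra_kwargs: Optional[Dict[str, Any]] = None) -> Iterable[Any]:
       if extra_kwargs:
         raise ValueError(
             'extra_kwargs is not supported by %s, got keys: %s' %
             (type(self).__name__, sorted(extra_kwargs)))
       # model.predict is a stand-in for the framework's inference call.
       return [model.predict(example) for example in batch]
   ```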





[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r894918486


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -216,8 +223,12 @@ def process(self, batch, **kwargs):
       keys = None
 
     start_time = _to_microseconds(self._clock.time_ns())
-    result_generator = self._model_handler.run_inference(
-        examples, self._model, **kwargs)
+    if extra_kwargs:
+      result_generator = self._model_handler.run_inference(
+          examples, self._model, extra_kwargs)
+    else:
+      result_generator = self._model_handler.run_inference(
+          examples, self._model)

Review Comment:
   Can't we just always pass `extra_kwargs` through?





[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897124510


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -217,6 +225,8 @@ class RunInference(beam.PTransform[beam.PCollection[ExampleT],
   Args:
       model_handler: An implementation of ModelHandler.
       clock: A clock implementing get_current_time_in_microseconds.
+      extra_kwargs: Extra arguments for models whose inference call requires
+        extra parameters.

Review Comment:
   Thanks. Fixed.





[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897018804


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -217,6 +225,8 @@ class RunInference(beam.PTransform[beam.PCollection[ExampleT],
   Args:
       model_handler: An implementation of ModelHandler.
       clock: A clock implementing get_current_time_in_microseconds.
+      extra_kwargs: Extra arguments for models whose inference call requires
+        extra parameters.

Review Comment:
   I think that PythonDocs PreCommit is unhappy with the indentation here (not sure though). You should be able to reproduce the failure locally with `tox -e py38-docs` to test a fix.
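   
   For reference, the usual shape of a fix for that warning is to keep each `Args:` entry and its continuation line at one consistent hanging indent; a sketch (illustrative only; the exact change merged in this PR may differ):
   
   ```
   def run_inference_example():
     """Docstring layout that Sphinx parses cleanly.
   
     Args:
         extra_kwargs: Extra arguments for models whose inference call
           requires extra parameters.
     """
   ```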





[GitHub] [beam] TheNeuralBit commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1154607032

   It seems the actual error is hidden by all the dataframe cruft, sorry about that.
   
   ```
   15:21:58 /home/jenkins/jenkins-slave/workspace/beam_PreCommit_PythonDocs_Phrase/src/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/target/.tox-py38-docs/py38-docs/lib/python3.8/site-packages/apache_beam/ml/inference/base.py:docstring of apache_beam.ml.inference.base.RunInference:5: WARNING: Unexpected indentation.
   ```




[GitHub] [beam] asf-ci commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
asf-ci commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1152711172

   Can one of the admins verify this patch?




[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r894898749


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -115,12 +115,12 @@ def forward(self, x):
     return out
 
 
-class PytorchLinearRegressionKwargsPredictionParams(torch.nn.Module):
+class PytorchLinearRegressionKeyedBatchAndExtraParams(torch.nn.Module):
   """
-  A linear model with kwargs inputs and non-batchable input params.
+  A linear model with batched keyed inputs and non-batchable extra params.
 
-  Note: k1 and k2 are batchable inputs passed in as a kwargs.
-  prediction_param_array, prediction_param_bool are non-batchable inputs
+  Note: k1 and k2 are batchable examples passed in as a keyed to torch dict.

Review Comment:
   Sorry, I misread this comment. I'll reword. 





[GitHub] [beam] ryanthompson591 commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
ryanthompson591 commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897040957


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   These arguments should be passed through if they exist and not otherwise.



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -237,10 +237,8 @@ def test_run_inference_kwargs_prediction_params(self):
     inference_runner = TestPytorchModelHandlerForInferenceOnly(
         torch.device('cpu'))
     predictions = inference_runner.run_inference(
-        batch=KWARGS_TORCH_EXAMPLES,
-        model=model,
-        prediction_params=prediction_params)
-    for actual, expected in zip(predictions, KWARGS_TORCH_PREDICTIONS):
+        batch=KEYED_TORCH_EXAMPLES, model=model, extra_kwargs=extra_kwargs)

Review Comment:
   I think we should keep these args as anonymous 



##########
sdks/python/apache_beam/ml/inference/sklearn_inference.py:
##########
@@ -74,8 +77,11 @@ def load_model(self) -> BaseEstimator:
     return _load_model(self._model_uri, self._model_file_type)
 
   def run_inference(
-      self, batch: Sequence[numpy.ndarray], model: BaseEstimator,
-      **kwargs) -> Iterable[PredictionResult]:
+      self,
+      batch: Sequence[numpy.ndarray],
+      model: BaseEstimator,
+      extra_kwargs: Optional[Dict[str,

Review Comment:
   No. Let's not put these extra args into sklearn. The model doesn't want them, so they don't need to be there.
   
   These would be extra unused parameters. They don't need to be here, and they don't need to be in tensorflow implementations either.
   
   Don't put them here. If anything, the only thing this PR should do is remove **kwargs from this interface; the extra argument shouldn't be here.
   
   If someone puts extra args into this interface that it isn't expecting, it should fail instead of working silently.



##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -84,8 +86,11 @@ def load_model(self) -> ModelT:
     """Loads and initializes a model for processing."""
     raise NotImplementedError(type(self))
 
-  def run_inference(self, batch: Sequence[ExampleT], model: ModelT,
-                    **kwargs) -> Iterable[PredictionT]:
+  def run_inference(
+      self,
+      batch: Sequence[ExampleT],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None) -> Iterable[PredictionT]:

Review Comment:
   This is a bad name. Something like `inference_arguments` would be the right name, since these are not just generic kwargs but rather inference-specific arguments.





[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897100304


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   I don't mean to slow things down here, I'll defer to what you all want to do. But I just want to play devil's advocate a bit: If it's a part of the interface it should be a part of the interface. Making it into a dynamic, optional parameter just makes the carve out worse (IMO).





[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897127796


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   Sorry, I didn't refresh this page before posting so I didn't see the new comments. I'm leaning towards having `extra_kwargs` b/c it's part of the interface, even if it isn't used in sklearn. 





[GitHub] [beam] ryanthompson591 commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
ryanthompson591 commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897088019


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   I'm worried about growing the interface in a lot of places to handle arguments that are only applicable to a single framework.
   
   If an argument shouldn't be in a framework, ideally it should fail and give an error.





[GitHub] [beam] ryanthompson591 commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
ryanthompson591 commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897131988


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -237,10 +237,8 @@ def test_run_inference_kwargs_prediction_params(self):
     inference_runner = TestPytorchModelHandlerForInferenceOnly(
         torch.device('cpu'))
     predictions = inference_runner.run_inference(
-        batch=KWARGS_TORCH_EXAMPLES,
-        model=model,
-        prediction_params=prediction_params)
-    for actual, expected in zip(predictions, KWARGS_TORCH_PREDICTIONS):
+        batch=KEYED_TORCH_EXAMPLES, model=model, extra_kwargs=extra_kwargs)

Review Comment:
   Ok, I've been thinking about this:
   
   I was playing around with ways to make arguments specific to run_inference, and I think there are only three ways: what you have done, anonymous args, or an if statement:
   
   ```
   if inference_args:
     model_handler.run_inference(model, batch, inference_args)
   else:
     model_handler.run_inference(model, batch)
   ```
   
   I'm not sure what I prefer now that I'm looking at it.
   
   The if statement has the advantage of allowing clients that don't expect this argument to fail or pass without modifications.





[GitHub] [beam] TheNeuralBit commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1154497325

   Run PythonDocs PreCommit




[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r895618929


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -104,9 +110,9 @@ def __init__(
       self,
       model_handler: ModelHandler[ExampleT, PredictionT, Any],
       clock=time,
-      **kwargs):
+      extra_runinference_args: Optional[Dict[str, Any]] = None):

Review Comment:
   Changed, and added. 



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -115,12 +115,12 @@ def forward(self, x):
     return out
 
 
-class PytorchLinearRegressionKwargsPredictionParams(torch.nn.Module):
+class PytorchLinearRegressionKeyedBatchAndExtraParams(torch.nn.Module):
   """
-  A linear model with kwargs inputs and non-batchable input params.
+  A linear model with batched keyed inputs and non-batchable extra params.
 
-  Note: k1 and k2 are batchable inputs passed in as a kwargs.
-  prediction_param_array, prediction_param_bool are non-batchable inputs
+  Note: k1 and k2 are batchable examples passed in as a keyed to torch dict.

Review Comment:
   Fixed.





[GitHub] [beam] yeandy commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1152687227

   R: @ryanthompson591 @TheNeuralBit  @robertwb 




[GitHub] [beam] asf-ci commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
asf-ci commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1152711164

   Can one of the admins verify this patch?




[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897092888


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   But the fact is the interface _has_ grown, extra_kwargs is an argument on `ModelHandler.run_inference`.





[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897068629


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   why?





[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897120580


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -84,8 +86,11 @@ def load_model(self) -> ModelT:
     """Loads and initializes a model for processing."""
     raise NotImplementedError(type(self))
 
-  def run_inference(self, batch: Sequence[ExampleT], model: ModelT,
-                    **kwargs) -> Iterable[PredictionT]:
+  def run_inference(
+      self,
+      batch: Sequence[ExampleT],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None) -> Iterable[PredictionT]:

Review Comment:
   Changed to `inference_args` for conciseness. 



##########
sdks/python/apache_beam/ml/inference/sklearn_inference.py:
##########
@@ -74,8 +77,11 @@ def load_model(self) -> BaseEstimator:
     return _load_model(self._model_uri, self._model_file_type)
 
   def run_inference(
-      self, batch: Sequence[numpy.ndarray], model: BaseEstimator,
-      **kwargs) -> Iterable[PredictionResult]:
+      self,
+      batch: Sequence[numpy.ndarray],
+      model: BaseEstimator,
+      extra_kwargs: Optional[Dict[str,

Review Comment:
   Good point. Removed.





[GitHub] [beam] asf-ci commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
asf-ci commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1152712895

   Can one of the admins verify this patch?




[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r895620213


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -216,8 +223,12 @@ def process(self, batch, **kwargs):
       keys = None
 
     start_time = _to_microseconds(self._clock.time_ns())
-    result_generator = self._model_handler.run_inference(
-        examples, self._model, **kwargs)
+    if extra_kwargs:
+      result_generator = self._model_handler.run_inference(
+          examples, self._model, extra_kwargs)
+    else:
+      result_generator = self._model_handler.run_inference(
+          examples, self._model)

Review Comment:
   Yeah, I originally had it this way because I had only replaced `**kwargs` with `extra_kwargs` in `pytorch`, not in `base` or `sklearn`. Changed.





[GitHub] [beam] yeandy commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1154221857

   PTAL @TheNeuralBit 




[GitHub] [beam] TheNeuralBit commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1155834913

   Run Python PreCommit




[GitHub] [beam] asf-ci commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
asf-ci commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1152711173

   Can one of the admins verify this patch?




[GitHub] [beam] TheNeuralBit closed pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit closed pull request #21806: Remove kwargs and add explicit runinference_args
URL: https://github.com/apache/beam/pull/21806




[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897138843


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   Caught up with @ryanthompson591. We're going to stick with passing (renamed) `inference_args` for all frameworks, but for sklearn, raise an exception if a non-empty value is passed.





[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r898195468


##########
sdks/python/apache_beam/ml/inference/sklearn_inference.py:
##########
@@ -74,8 +91,12 @@ def load_model(self) -> BaseEstimator:
     return _load_model(self._model_uri, self._model_file_type)
 
   def run_inference(
-      self, batch: Sequence[numpy.ndarray], model: BaseEstimator,
-      **kwargs) -> Iterable[PredictionResult]:
+      self,
+      batch: Sequence[numpy.ndarray],
+      model: BaseEstimator,
+      inference_args: Optional[Dict[str, Any]] = None
+  ) -> Iterable[PredictionResult]:
+    _validate_inference_args(inference_args)

Review Comment:
   Yes. I've filed this issue https://github.com/apache/beam/issues/21894 as something to do later. 





[GitHub] [beam] tvalentyn commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
tvalentyn commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r898128282


##########
sdks/python/apache_beam/ml/inference/sklearn_inference.py:
##########
@@ -74,8 +91,12 @@ def load_model(self) -> BaseEstimator:
     return _load_model(self._model_uri, self._model_file_type)
 
   def run_inference(
-      self, batch: Sequence[numpy.ndarray], model: BaseEstimator,
-      **kwargs) -> Iterable[PredictionResult]:
+      self,
+      batch: Sequence[numpy.ndarray],
+      model: BaseEstimator,
+      inference_args: Optional[Dict[str, Any]] = None
+  ) -> Iterable[PredictionResult]:
+    _validate_inference_args(inference_args)

Review Comment:
   (not urgent). This validation will happen at pipeline execution, right? Can it be done at construction time to reduce the feedback loop?
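   
   A hypothetical sketch of construction-time validation; the `validate_inference_args` hook below is not part of this PR, just an assumed extension point:
   
   ```
   class RunInference:
     def __init__(self, model_handler, inference_args=None):
       # Failing here surfaces bad arguments at pipeline construction,
       # before any workers start.
       model_handler.validate_inference_args(inference_args)
       self._model_handler = model_handler
       self._inference_args = inference_args
   ```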





[GitHub] [beam] TheNeuralBit commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
TheNeuralBit commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r894882116


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -115,12 +115,12 @@ def forward(self, x):
     return out
 
 
-class PytorchLinearRegressionKwargsPredictionParams(torch.nn.Module):
+class PytorchLinearRegressionKeyedBatchAndExtraParams(torch.nn.Module):
   """
-  A linear model with kwargs inputs and non-batchable input params.
+  A linear model with batched keyed inputs and non-batchable extra params.
 
-  Note: k1 and k2 are batchable inputs passed in as a kwargs.
-  prediction_param_array, prediction_param_bool are non-batchable inputs
+  Note: k1 and k2 are batchable examples passed in as a keyed to torch dict.

Review Comment:
   "a keyed to torch dict"?



##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -104,9 +110,9 @@ def __init__(
       self,
       model_handler: ModelHandler[ExampleT, PredictionT, Any],
       clock=time,
-      **kwargs):
+      extra_runinference_args: Optional[Dict[str, Any]] = None):

Review Comment:
   What about `extra_kwargs`? These are still kwargs, and I don't think `runinference` adds anything.
   
   Could you also document this in the docstring?





[GitHub] [beam] codecov[bot] commented on pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
codecov[bot] commented on PR #21806:
URL: https://github.com/apache/beam/pull/21806#issuecomment-1154279605

   # [Codecov](https://codecov.io/gh/apache/beam/pull/21806?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#21806](https://codecov.io/gh/apache/beam/pull/21806?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (5268030) into [master](https://codecov.io/gh/apache/beam/commit/edf9b7906cd188cc7d16737d9a779a5cf585a6a2?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (edf9b79) will **decrease** coverage by `0.01%`.
   > The diff coverage is `63.63%`.
   
   ```diff
   @@            Coverage Diff             @@
   ##           master   #21806      +/-   ##
   ==========================================
   - Coverage   74.15%   74.13%   -0.02%     
   ==========================================
     Files         698      698              
     Lines       92417    92410       -7     
   ==========================================
   - Hits        68530    68507      -23     
   - Misses      22636    22652      +16     
     Partials     1251     1251              
   ```
   
   | Flag | Coverage Δ | |
   |---|---|---|
   | python | `83.74% <63.63%> (-0.03%)` | :arrow_down: |
   
   Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
   
   | [Impacted Files](https://codecov.io/gh/apache/beam/pull/21806?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...thon/apache\_beam/ml/inference/pytorch\_inference.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvaW5mZXJlbmNlL3B5dG9yY2hfaW5mZXJlbmNlLnB5) | `0.00% <0.00%> (ø)` | |
   | [sdks/python/apache\_beam/ml/inference/base.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvaW5mZXJlbmNlL2Jhc2UucHk=) | `95.88% <100.00%> (+0.82%)` | :arrow_up: |
   | [...thon/apache\_beam/ml/inference/sklearn\_inference.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvaW5mZXJlbmNlL3NrbGVhcm5faW5mZXJlbmNlLnB5) | `93.54% <100.00%> (+0.21%)` | :arrow_up: |
   | [.../python/apache\_beam/testing/test\_stream\_service.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vdGVzdGluZy90ZXN0X3N0cmVhbV9zZXJ2aWNlLnB5) | `88.09% <0.00%> (-4.77%)` | :arrow_down: |
   | [...n/apache\_beam/ml/gcp/recommendations\_ai\_test\_it.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvZ2NwL3JlY29tbWVuZGF0aW9uc19haV90ZXN0X2l0LnB5) | `73.46% <0.00%> (-2.05%)` | :arrow_down: |
   | [.../python/apache\_beam/transforms/periodicsequence.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vdHJhbnNmb3Jtcy9wZXJpb2RpY3NlcXVlbmNlLnB5) | `96.72% <0.00%> (-1.64%)` | :arrow_down: |
   | [sdks/python/apache\_beam/io/source\_test\_utils.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vaW8vc291cmNlX3Rlc3RfdXRpbHMucHk=) | `88.01% <0.00%> (-1.39%)` | :arrow_down: |
   | [...che\_beam/runners/interactive/interactive\_runner.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy9pbnRlcmFjdGl2ZS9pbnRlcmFjdGl2ZV9ydW5uZXIucHk=) | `90.06% <0.00%> (-1.33%)` | :arrow_down: |
   | [...eam/runners/portability/fn\_api\_runner/execution.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy9wb3J0YWJpbGl0eS9mbl9hcGlfcnVubmVyL2V4ZWN1dGlvbi5weQ==) | `92.44% <0.00%> (-0.65%)` | :arrow_down: |
   | [...hon/apache\_beam/runners/worker/bundle\_processor.py](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy93b3JrZXIvYnVuZGxlX3Byb2Nlc3Nvci5weQ==) | `93.42% <0.00%> (-0.25%)` | :arrow_down: |
   | ... and [6 more](https://codecov.io/gh/apache/beam/pull/21806/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/beam/pull/21806?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/beam/pull/21806?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [edf9b79...5268030](https://codecov.io/gh/apache/beam/pull/21806?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   




[GitHub] [beam] tvalentyn merged pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
tvalentyn merged PR #21806:
URL: https://github.com/apache/beam/pull/21806




[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897119480


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -237,10 +237,8 @@ def test_run_inference_kwargs_prediction_params(self):
     inference_runner = TestPytorchModelHandlerForInferenceOnly(
         torch.device('cpu'))
     predictions = inference_runner.run_inference(
-        batch=KWARGS_TORCH_EXAMPLES,
-        model=model,
-        prediction_params=prediction_params)
-    for actual, expected in zip(predictions, KWARGS_TORCH_PREDICTIONS):
+        batch=KEYED_TORCH_EXAMPLES, model=model, extra_kwargs=extra_kwargs)

Review Comment:
   Can you clarify what you mean by anonymous args?







[GitHub] [beam] yeandy commented on a diff in pull request #21806: Remove kwargs and add explicit runinference_args

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #21806:
URL: https://github.com/apache/beam/pull/21806#discussion_r897120780


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -125,11 +130,14 @@ def load_model(self) -> ModelT:
     return self._unkeyed.load_model()
 
   def run_inference(
-      self, batch: Sequence[Tuple[KeyT, ExampleT]], model: ModelT,
-      **kwargs) -> Iterable[Tuple[KeyT, PredictionT]]:
+      self,
+      batch: Sequence[Tuple[KeyT, ExampleT]],
+      model: ModelT,
+      extra_kwargs: Optional[Dict[str, Any]] = None
+  ) -> Iterable[Tuple[KeyT, PredictionT]]:
     keys, unkeyed_batch = zip(*batch)
     return zip(
-        keys, self._unkeyed.run_inference(unkeyed_batch, model, **kwargs))
+        keys, self._unkeyed.run_inference(unkeyed_batch, model, extra_kwargs))

Review Comment:
   Done.


