Posted to github@beam.apache.org by "tvalentyn (via GitHub)" <gi...@apache.org> on 2023/02/08 18:32:50 UTC

[GitHub] [beam] tvalentyn commented on a diff in pull request #25370: Support batching as config in RunInference (sklearn and pytorch)

tvalentyn commented on code in PR #25370:
URL: https://github.com/apache/beam/pull/25370#discussion_r1100531179


##########
sdks/python/apache_beam/ml/inference/sklearn_inference_test.py:
##########
@@ -338,6 +372,42 @@ def test_pipeline_pandas(self):
       assert_that(
           actual, equal_to(expected, equals_fn=_compare_dataframe_predictions))
 
+  def test_pipeline_pandas_custom_batching(self):
+    temp_file_name = self.tmpdir + os.sep + 'pickled_file'
+    with open(temp_file_name, 'wb') as file:
+      pickle.dump(build_pandas_pipeline(), file)
+
+    def batch_validator_pandas_inference_fn(
+        model: BaseEstimator,
+        batch: Sequence[numpy.ndarray],
+        inference_args: Optional[Dict[str, Any]] = None) -> Any:
+      if len(batch) != 5:
+        raise Exception(
+            'Expected batch of size 5, received batch of size {}'.format(
+                len(batch)))

Review Comment:
   (personal opinion) I find f-strings easier to read, e.g. `f'Expected batch of size 5, received batch of size {len(batch)}'`.
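   For illustration, here is the check above rewritten with the suggested f-string (a sketch of the proposed change, not the committed code):

      if len(batch) != 5:
        raise Exception(
            f'Expected batch of size 5, received batch of size {len(batch)}')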


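Since the PR under review adds batch-size configuration to RunInference, a minimal sketch of how such a pipeline could be wired up follows, assuming the sklearn model handler accepts the min_batch_size and max_batch_size keyword arguments this PR proposes (the model path and example data here are hypothetical):

   import numpy
   import apache_beam as beam
   from apache_beam.ml.inference.base import RunInference
   from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

   # Force batches of exactly 5 elements, mirroring the size check in the
   # test above (min_batch_size/max_batch_size assumed per this PR).
   model_handler = SklearnModelHandlerNumpy(
       model_uri='/tmp/pickled_file',  # hypothetical path to a pickled model
       min_batch_size=5,
       max_batch_size=5)

   examples = [numpy.array([i]) for i in range(10)]

   with beam.Pipeline() as pipeline:
     _ = (
         pipeline
         | beam.Create(examples)
         | RunInference(model_handler))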

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org