Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/08/19 14:14:37 UTC

[GitHub] [beam] yeandy opened a new pull request, #22795: Fix gpu to cpu conversion with warning logs

yeandy opened a new pull request, #22795:
URL: https://github.com/apache/beam/pull/22795

   Addresses https://github.com/apache/beam/issues/22711 and https://github.com/apache/beam/issues/22712
   
   
   ------------------------
   
   Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
   
    - [ ] [**Choose reviewer(s)**](https://beam.apache.org/contribute/#make-your-change) and mention them in a comment (`R: @username`).
    - [ ] Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
    - [ ] Update `CHANGES.md` with noteworthy changes.
    - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   See the [Contributor Guide](https://beam.apache.org/contribute) for more tips on [how to make review process smoother](https://beam.apache.org/contribute/get-started-contributing/#make-the-reviewers-job-easier).
   
   To check the build health, please visit [https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md](https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md)
   
   GitHub Actions Tests Status (on master branch)
   ------------------------------------------------------------------------------------------------
   [![Build python source distribution and wheels](https://github.com/apache/beam/workflows/Build%20python%20source%20distribution%20and%20wheels/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Build+python+source+distribution+and+wheels%22+branch%3Amaster+event%3Aschedule)
   [![Python tests](https://github.com/apache/beam/workflows/Python%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Python+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Java tests](https://github.com/apache/beam/workflows/Java%20Tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Java+Tests%22+branch%3Amaster+event%3Aschedule)
   [![Go tests](https://github.com/apache/beam/workflows/Go%20tests/badge.svg?branch=master&event=schedule)](https://github.com/apache/beam/actions?query=workflow%3A%22Go+tests%22+branch%3Amaster+event%3Aschedule)
   
   See [CI.md](https://github.com/apache/beam/blob/master/CI.md) for more information about GitHub Actions CI.
   


[GitHub] [beam] tvalentyn commented on pull request #22795: Fix gpu to cpu conversion with warning logs

tvalentyn commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1232261226

   > I wouldn't know, it wouldn't be apparent to me without a comment. But I suppose it might also be discoverable through blame. I'll leave it up to you.
   
   We could mention `https://github.com/apache/beam/issues/22811` in an inline comment.


[GitHub] [beam] yeandy commented on pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1233079968

   cc: @jrmccluskey @damccorm 


[GitHub] [beam] tvalentyn commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

tvalentyn commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r952944246


##########
sdks/python/apache_beam/examples/snippets/transforms/elementwise/runinference_test.py:
##########
@@ -44,10 +44,10 @@
 
 def check_torch_keyed_model_handler():
   expected = '''[START torch_keyed_model_handler]
-('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982], grad_fn=<UnbindBackward>)))
-('second_question', PredictionResult(example=tensor([108.]), inference=tensor([538.5867], grad_fn=<UnbindBackward>)))
-('third_question', PredictionResult(example=tensor([1000.]), inference=tensor([4965.4019], grad_fn=<UnbindBackward>)))
-('fourth_question', PredictionResult(example=tensor([1013.]), inference=tensor([5029.9180], grad_fn=<UnbindBackward>)))
+('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982])))

Review Comment:
   do we want to round up these answers or introduce some other margin of error?



[GitHub] [beam] yeandy commented on pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1222767406

   @tvalentyn Tests pass now


[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r954205790


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,32 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Model handler specified a 'GPU' device, but GPUs are not available. " \
+        "Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info(
+        "Loading state_dict_path %s onto a %s device", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Loading the model onto a GPU device failed due to an " \

Review Comment:
   Well, I can't say that the error message will always be: `Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False`. But it will definitely stem from the `torch.load(file, map_location=device)` call, so that should provide enough context as to where the exception occurred.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950237304


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +263,17 @@ def run_inference(
     # If elements in `batch` are provided as a dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   Necessary to prevent the `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!` error.
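
   A minimal sketch of what `torch.no_grad()` does in an inference path (hypothetical model and inputs, not the Beam handler itself): the forward pass builds no autograd graph, so outputs carry no `grad_fn` and no extra autograd state is kept on the device.
   ```
   import torch

   model = torch.nn.Linear(1, 1)
   batch = torch.stack([torch.tensor([1.0]), torch.tensor([5.0])])

   # Inference only: disable gradient tracking so no autograd graph is built.
   with torch.no_grad():
     predictions = model(batch)

   assert predictions.grad_fn is None  # output carries no autograd history
   ```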



[GitHub] [beam] ryanthompson591 commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

ryanthompson591 commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r959000211


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +266,17 @@ def run_inference(
     # If elements in `batch` are provided as a dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   I'm just thinking: if a future developer came in and asked, "Hey, why is torch.no_grad here? Why do we need this check?"
   
   I wouldn't know; it wouldn't be apparent to me without a comment. But I suppose it might also be discoverable through blame. I'll leave it up to you.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950474838


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading, device converted to cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(

Review Comment:
   I check for the change [here](https://github.com/apache/beam/pull/22795/files/73c32fd2ec5ef5d5586bf5034c3f0c54323b8b1c#diff-dbfeab547e43f6c55fab43276a2d6fed8222381313e1af9030859ca1c235127cR394-R402). Is this good enough?  



[GitHub] [beam] yeandy commented on pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1220738479

   R: @tvalentyn @ryanthompson591 
   cc: @BjornPrime 


[GitHub] [beam] tvalentyn commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

tvalentyn commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950534882


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")

Review Comment:
   We could add some details about where this configuration was made. How about something like:
   
           "Model handler specified a 'GPU' device, but GPUs are not available. Switching to CPU.")
   



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +263,17 @@ def run_inference(
     # If elements in `batch` are provided as a dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   Let's capture this information in a way that makes it easier for someone reading the code to find.
   
   Ideally, there would be a GH issue mentioning the error `Setting to CPU due to an exception...`; you'd add a comment referencing the issue, and link this PR to the issue.
   
   Another option is to capture this info in the pull request description for this PR (slightly more difficult to find, but easier than trying to find this review comment chain).
   
   



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)

Review Comment:
   ```suggestion
       logging.info("Loading state_dict_path %s onto a %s device", state_dict_path, device)
   ```



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Setting to CPU due to an exception while deserializing" \

Review Comment:
   How about:
   
   
   Loading the model onto a GPU device failed due to an exception: 
   ...
   
   Attempting to load onto a CPU device instead.



[GitHub] [beam] ryanthompson591 commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

ryanthompson591 commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950283081


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")
+    device = torch.device('cpu')
+
+  try:
+    logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)
+    file = FileSystems.open(state_dict_path, 'rb')
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:

Review Comment:
   Can this be more specific than a RuntimeError?
   
   Also, can we narrow down the scope of the try/except block?
   
   For example, are we only expecting it to fail if torch.load fails?
   



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading, device converted to cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(

Review Comment:
   Can you add something more to the test other than just looking at logs? Maybe make sure the device changes, or make sure that it runs.



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading, device converted to cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(
+          "WARNING:root:Specified 'GPU', but could not find device. " \
+          "Switching to CPU.",
+          log.output)
+

Review Comment:
   Does it make sense to add a unit test where the model also fails even after trying to fall back to running on the CPU?
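
   One way such a test could look (a hypothetical sketch, reusing the fixtures from the test above and assuming `mock`, `os`, `OrderedDict`, `_load_model`, and `PytorchLinearRegression` are importable in this test module; not code from this PR): force `torch.load` to fail on every device so the CPU fallback cannot succeed either, and assert the error surfaces.
   ```
   def test_gpu_to_cpu_fallback_also_fails(self):
     state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
                               ('linear.bias', torch.Tensor([0.5]))])
     path = os.path.join(self.tmpdir, 'my_state_dict_path')
     torch.save(state_dict, path)
     # torch.load fails regardless of device, so falling back to CPU
     # cannot help and the RuntimeError should propagate to the caller.
     with mock.patch('torch.load', side_effect=RuntimeError('load failed')):
       with self.assertRaises(RuntimeError):
         _load_model(
             PytorchLinearRegression, path, torch.device('cuda'),
             input_dim=1, output_dim=1)
   ```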



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):

Review Comment:
   Why does this test fail over and run on the CPU? Is it because the model is set up to be a CPU model, or is it because we are sure we are running the unit test on a machine where the GPU fails?
   
   Can you add some comments explaining that?



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r955420660


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading, device converted to cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(
+          "WARNING:root:Specified 'GPU', but could not find device. " \
+          "Switching to CPU.",
+          log.output)
+

Review Comment:
   Addressed per your suggested change.



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,32 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Model handler specified a 'GPU' device, but GPUs are not available. " \
+        "Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info(
+        "Loading state_dict_path %s onto a %s device", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Loading the model onto a GPU device failed due to an " \
+      f"exception:\n{e}\nAttempting to load onto a CPU device instead."
+    logging.warning(message)
+
+    device = torch.device('cpu')

Review Comment:
   Addressed per your suggested change.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r959532837


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +266,17 @@ def run_inference(
     # If elements in `batch` are provided as a dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   Makes sense. Added a comment



[GitHub] [beam] ryanthompson591 commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

ryanthompson591 commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r956387116


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -188,20 +216,24 @@ def __init__(
         Otherwise, it will be CPU.
     """
     self._state_dict_path = state_dict_path
-    if device == 'GPU' and torch.cuda.is_available():
+    if device == 'GPU':
+      logging.info("Device is set to CUDA")

Review Comment:
   I wonder if some of this logging is getting to be a little too much. Were you using some of it to debug?
   
   For example, you have "device is set to" xxx in a couple of spots. Is that still important logging to keep?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +266,17 @@ def run_inference(
     # If elements in `batch` are provided as a dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   Since it's not apparent why this is here, consider linking the issue.
   



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading, device converted to cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(

Review Comment:
   Sure, seems fine.



[GitHub] [beam] tvalentyn merged pull request #22795: Fix gpu to cpu conversion with warning logs

tvalentyn merged PR #22795:
URL: https://github.com/apache/beam/pull/22795


[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r953009585


##########
sdks/python/apache_beam/examples/snippets/transforms/elementwise/runinference_test.py:
##########
@@ -44,10 +44,10 @@
 
 def check_torch_keyed_model_handler():
   expected = '''[START torch_keyed_model_handler]
-('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982], grad_fn=<UnbindBackward>)))
-('second_question', PredictionResult(example=tensor([108.]), inference=tensor([538.5867], grad_fn=<UnbindBackward>)))
-('third_question', PredictionResult(example=tensor([1000.]), inference=tensor([4965.4019], grad_fn=<UnbindBackward>)))
-('fourth_question', PredictionResult(example=tensor([1013.]), inference=tensor([5029.9180], grad_fn=<UnbindBackward>)))
+('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982])))

Review Comment:
   PyTorch's default print [precision](https://pytorch.org/docs/stable/generated/torch.set_printoptions.html) is 4 digits. Given the weights of the model that we've saved to GCS, I think this should be fine.
   
   Also, here we're asserting that the stdout strings match, which makes it harder to do asserts with an error bound, as would typically be easy with raw floats.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950237304


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +263,17 @@ def run_inference(
     # If elements in `batch` are provided as a dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   Necessary to prevent this error
   ```
   Setting to CPU due to an exception while deserializing state_dict_path. Exception: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 39.59 GiB total capacity; 666.60 MiB already allocated; 13.19 MiB free; 682.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
   ```



[GitHub] [beam] yeandy commented on pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1222768286

   Run Python 3.9 PostCommit


[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r954208736


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,32 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Model handler specified a 'GPU' device, but GPUs are not available. " \
+        "Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info(
+        "Loading state_dict_path %s onto a %s device", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Loading the model onto a GPU device failed due to an " \
+      f"exception:\n{e}\nAttempting to load onto a CPU device instead."
+    logging.warning(message)
+
+    device = torch.device('cpu')

Review Comment:
   Thanks for bringing up your concern; I think the scenario you posed is a possibility (I just don't know of one off the top of my head), so we can incorporate this change to be safe.



[GitHub] [beam] ryanthompson591 commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

ryanthompson591 commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r953814223


##########
sdks/python/apache_beam/examples/snippets/transforms/elementwise/runinference_test.py:
##########
@@ -44,10 +44,10 @@
 
 def check_torch_keyed_model_handler():
   expected = '''[START torch_keyed_model_handler]
-('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982], grad_fn=<UnbindBackward>)))
-('second_question', PredictionResult(example=tensor([108.]), inference=tensor([538.5867], grad_fn=<UnbindBackward>)))
-('third_question', PredictionResult(example=tensor([1000.]), inference=tensor([4965.4019], grad_fn=<UnbindBackward>)))
-('fourth_question', PredictionResult(example=tensor([1013.]), inference=tensor([5029.9180], grad_fn=<UnbindBackward>)))
+('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982])))

Review Comment:
   I found I needed a margin of error for an sklearn example. I just extracted the numbers with a regex.
   
   See:
   https://github.com/apache/beam/blob/3f5ddbcf9fece6bd9905bf67adcbda9aec6f29e9/sdks/python/apache_beam/ml/inference/sklearn_inference_it_test.py#L110
   
   If you find you don't need it, it's ok to be precise for now.
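
   A rough sketch of that regex approach (hypothetical input line and an arbitrarily chosen tolerance): extract the floats from the printed output and compare with `math.isclose` instead of exact string equality.
   ```
   import math
   import re

   line = ("('first_question', PredictionResult(example=tensor([105.]), "
           "inference=tensor([523.6982])))")
   # Pull the numeric payload out of each tensor([...]) in the line.
   values = [float(v) for v in re.findall(r"tensor\(\[([-\d.]+)\]", line)]
   example, inference = values
   assert math.isclose(inference, 523.6982, rel_tol=1e-3)
   ```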



##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading, device converted to cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(
+          "WARNING:root:Specified 'GPU', but could not find device. " \
+          "Switching to CPU.",
+          log.output)
+

Review Comment:
   I mean, is it impossible for there to be any runtime errors if you load with device type cpu?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,32 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Model handler specified a 'GPU' device, but GPUs are not available. " \
+        "Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info(
+        "Loading state_dict_path %s onto a %s device", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Loading the model onto a GPU device failed due to an " \

Review Comment:
   Are we certain that if there is a runtime error, this is the reason? Could this message ever be a red herring?



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,32 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Model handler specified a 'GPU' device, but GPUs are not available. " \
+        "Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info(
+        "Loading state_dict_path %s onto a %s device", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Loading the model onto a GPU device failed due to an " \
+      f"exception:\n{e}\nAttempting to load onto a CPU device instead."
+    logging.warning(message)
+
+    device = torch.device('cpu')

Review Comment:
   I'm a little worried about this logic. Is it possible that we've already switched to attempting to use the CPU above, we still get a runtime error, and then we attempt to fall back to the CPU again?
   
   You could just call _load_model again with the new device type. Something like:
   ```
   if device == torch.device('cuda') and not torch.cuda.is_available():
     logging.warning('...')
     return _load_model(model_class, state_dict_path, 'CPU', **model_params)
   
   try:
     state_dict = torch.load(file, map_location=device)
   except RuntimeError as e:
     if device == torch.device('cuda'):
       # Fall back to CPU only once; if we were already on CPU, re-raise.
       logging.warning(...)
       return _load_model(model_class, state_dict_path, 'CPU', **model_params)
     else:
       raise e
   
   ...
   ```
   
   



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r951447323


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Setting to CPU due to an exception while deserializing" \

Review Comment:
   Done.



##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)

Review Comment:
   Done.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r951447516


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")

Review Comment:
   Done.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950478717


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):

Review Comment:
   The user specifies GPU using the `device='GPU'` arg. However, it detects that we're running the test on a machine without a GPU, so it automatically converts to CPU.
   ```
           model_handler = PytorchModelHandlerTensor(
               state_dict_path=path,
               model_class=PytorchLinearRegression,
               model_params={
                   'input_dim': 1, 'output_dim': 1
               },
               device='GPU')
   ``` 
   Added clarifying comments.



[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950466702


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Specified 'GPU', but could not find device. Switching to CPU.")
+    device = torch.device('cpu')
+
+  try:
+    logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)
+    file = FileSystems.open(state_dict_path, 'rb')
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:

Review Comment:
   That's the type of the exception, so we can't go more specific than that:
   ```
   RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
   ```
   Yes, I can change it to have just the `torch.load()` line in the block.
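
   A sketch of that narrowed scope (variable names as in the diff above): only the call that can raise this `RuntimeError` stays inside the try block.
   ```
   file = FileSystems.open(state_dict_path, 'rb')
   try:
     # torch.load is the only call expected to raise the CUDA
     # deserialization RuntimeError.
     state_dict = torch.load(file, map_location=device)
   except RuntimeError as e:
     ...
   ```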



[GitHub] [beam] tvalentyn commented on pull request #22795: Fix gpu to cpu conversion with warning logs

tvalentyn commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1232260235

   > I wouldn't know, it wouldn't be apparent to me without a comment. But I suppose it might also be discoverable through blame. I'll leave it up to you.
   
   We could add an in-line comment mentioning https://github.com/apache/beam/issues/22811




[GitHub] [beam] github-actions[bot] commented on pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1220739861

   Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control




[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950484987


##########
sdks/python/apache_beam/ml/inference/pytorch_inference_test.py:
##########
@@ -373,6 +373,40 @@ def test_invalid_input_type(self):
         # pylint: disable=expression-not-assigned
         pcoll | RunInference(model_handler)
 
+  def test_gpu_convert_to_cpu(self):
+    with self.assertLogs() as log:
+      with TestPipeline() as pipeline:
+        examples = torch.from_numpy(
+            np.array([1, 5, 3, 10], dtype="float32").reshape(-1, 1))
+
+        state_dict = OrderedDict([('linear.weight', torch.Tensor([[2.0]])),
+                                  ('linear.bias', torch.Tensor([0.5]))])
+        path = os.path.join(self.tmpdir, 'my_state_dict_path')
+        torch.save(state_dict, path)
+
+        model_handler = PytorchModelHandlerTensor(
+            state_dict_path=path,
+            model_class=PytorchLinearRegression,
+            model_params={
+                'input_dim': 1, 'output_dim': 1
+            },
+            device='GPU')
+        # Upon initialization, device is cuda
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        # pylint: disable=expression-not-assigned
+        pcoll | RunInference(model_handler)
+
+        # During model loading the fallback to cpu happens locally, so the
+        # handler's device attribute still reports cuda here
+        self.assertEqual(model_handler._device, torch.device('cuda'))
+
+      self.assertIn("INFO:root:Device is set to CUDA", log.output)
+      self.assertIn(
+          "WARNING:root:Specified 'GPU', but could not find device. " \
+          "Switching to CPU.",
+          log.output)
+

Review Comment:
   I don't think there's any other scenario that would fail again in the `except` block.
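   
   For reference, the one failure mode the `except` guards against can be reproduced in isolation on a CPU-only machine (hypothetical standalone snippet, not part of this PR):
   ```
   import torch
   
   torch.save({'w': torch.tensor([1.0])}, '/tmp/sd.pt')
   # Without CUDA available, mapping storages onto a CUDA device raises:
   #   RuntimeError: Attempting to deserialize object on a CUDA device
   #   but torch.cuda.is_available() is False. ...
   torch.load('/tmp/sd.pt', map_location=torch.device('cuda'))
   ```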





[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r956417937


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -188,20 +216,24 @@ def __init__(
         Otherwise, it will be CPU.
     """
     self._state_dict_path = state_dict_path
-    if device == 'GPU' and torch.cuda.is_available():
+    if device == 'GPU':
+      logging.info("Device is set to CUDA")

Review Comment:
   I wanted to be very explicit about accelerator detection so users wouldn't have to second-guess the kind of environment they were operating in. (Because Dataflow jobs run on multiple workers, I can imagine future scenarios in which GPU availability is inconsistent across workers.)
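   
   Roughly the shape of that detection, as a standalone sketch (`_select_device` is a stand-in name; the diff above shows the actual change):
   ```
   import logging
   
   import torch
   
   def _select_device(device: str) -> torch.device:
     # Log the decision explicitly so worker logs show which accelerator
     # each worker actually ended up with.
     if device == 'GPU':
       logging.info("Device is set to CUDA")
       return torch.device('cuda')
     logging.info("Device is set to CPU")
     return torch.device('cpu')
   ```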





[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r951444429


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +263,17 @@ def run_inference(
     # If elements in `batch` are provided as dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   Filed a new issue, and it has been referenced in this PR's description.
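   
   For context on the flagged line: `torch.no_grad()` disables autograd tracking for the forward pass, which avoids building a graph we never backpropagate through and keeps `grad_fn` out of the returned tensors. A standalone sketch of the batching-plus-`no_grad` pattern (an illustration, not the PR code itself):
   ```
   from collections import defaultdict
   
   import torch
   
   def run_batch(model, batch, device):
     # Group each example's tensors by key, then stack into batched tensors.
     key_to_tensor_list = defaultdict(list)
     for example in batch:
       for key, tensor in example.items():
         key_to_tensor_list[key].append(tensor)
     with torch.no_grad():
       key_to_batched_tensors = {
           key: torch.stack(tensors).to(device)
           for key, tensors in key_to_tensor_list.items()
       }
       return model(**key_to_batched_tensors)
   ```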





[GitHub] [beam] yeandy commented on pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1222526383

   I think I know why. Will make a change




[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r956420451


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,16 +266,17 @@ def run_inference(
     # If elements in `batch` are provided as dictionaries from key to Tensors,
     # then iterate through the batch list, and group Tensors to the same key
     key_to_tensor_list = defaultdict(list)
-    for example in batch:
-      for key, tensor in example.items():
-        key_to_tensor_list[key].append(tensor)
-    key_to_batched_tensors = {}
-    for key in key_to_tensor_list:
-      batched_tensors = torch.stack(key_to_tensor_list[key])
-      batched_tensors = _convert_to_device(batched_tensors, self._device)
-      key_to_batched_tensors[key] = batched_tensors
-    predictions = model(**key_to_batched_tensors, **inference_args)
-    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]
+    with torch.no_grad():

Review Comment:
   https://github.com/apache/beam/issues/22811
   
   I don't think we should link the URL `https://github.com/apache/beam/issues/22811` explicitly in the code comments, since the issue will technically be resolved by then and the link would just add confusion.
   
   Unless you think it would be better to incorporate this particular change in a new PR? That wouldn't be hard to do.





[GitHub] [beam] yeandy commented on pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1232885331

   Added `https://github.com/apache/beam/issues/22811` to some comments.




[GitHub] [beam] tvalentyn commented on pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
tvalentyn commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1222510548

   looks like tests are failing.




[GitHub] [beam] yeandy commented on a diff in pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r955420180


##########
sdks/python/apache_beam/examples/snippets/transforms/elementwise/runinference_test.py:
##########
@@ -44,10 +44,10 @@
 
 def check_torch_keyed_model_handler():
   expected = '''[START torch_keyed_model_handler]
-('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982], grad_fn=<UnbindBackward>)))
-('second_question', PredictionResult(example=tensor([108.]), inference=tensor([538.5867], grad_fn=<UnbindBackward>)))
-('third_question', PredictionResult(example=tensor([1000.]), inference=tensor([4965.4019], grad_fn=<UnbindBackward>)))
-('fourth_question', PredictionResult(example=tensor([1013.]), inference=tensor([5029.9180], grad_fn=<UnbindBackward>)))
+('first_question', PredictionResult(example=tensor([105.]), inference=tensor([523.6982])))

Review Comment:
   I think it's ok for now.
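   
   (The `grad_fn=<UnbindBackward>` suffix disappears from the expected output because inference now runs under `torch.no_grad()`, so the result tensors carry no autograd history. A quick illustration; the exact `grad_fn` name varies by torch version:)
   ```
   import torch
   
   model = torch.nn.Linear(1, 1)
   x = torch.tensor([[105.0]])
   
   print(model(x))        # tensor([[...]], grad_fn=<AddmmBackward0>)
   with torch.no_grad():
     print(model(x))      # tensor([[...]])  -- no grad_fn recorded
   ```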





[GitHub] [beam] codecov[bot] commented on pull request #22795: Fix gpu to cpu conversion with warning logs

Posted by GitBox <gi...@apache.org>.
codecov[bot] commented on PR #22795:
URL: https://github.com/apache/beam/pull/22795#issuecomment-1220755955

   # [Codecov](https://codecov.io/gh/apache/beam/pull/22795?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#22795](https://codecov.io/gh/apache/beam/pull/22795?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (73c32fd) into [master](https://codecov.io/gh/apache/beam/commit/b5b93d161200ee07b85fc6aff53b1457a47d0382?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b5b93d1) will **decrease** coverage by `0.01%`.
   > The diff coverage is `0.00%`.
   
   ```diff
   @@            Coverage Diff             @@
   ##           master   #22795      +/-   ##
   ==========================================
   - Coverage   74.11%   74.09%   -0.02%     
   ==========================================
     Files         712      712              
     Lines       93793    93817      +24     
   ==========================================
     Hits        69513    69513              
   - Misses      23000    23024      +24     
     Partials     1280     1280              
   ```
   
   | Flag | Coverage Δ | |
   |---|---|---|
   | python | `83.52% <0.00%> (-0.04%)` | :arrow_down: |
   
   Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
   
   | [Impacted Files](https://codecov.io/gh/apache/beam/pull/22795?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [...thon/apache\_beam/ml/inference/pytorch\_inference.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vbWwvaW5mZXJlbmNlL3B5dG9yY2hfaW5mZXJlbmNlLnB5) | `0.00% <0.00%> (ø)` | |
   | [...eam/runners/portability/fn\_api\_runner/execution.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy9wb3J0YWJpbGl0eS9mbl9hcGlfcnVubmVyL2V4ZWN1dGlvbi5weQ==) | `92.44% <0.00%> (-0.65%)` | :arrow_down: |
   | [sdks/python/apache\_beam/runners/direct/executor.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy9kaXJlY3QvZXhlY3V0b3IucHk=) | `96.46% <0.00%> (-0.55%)` | :arrow_down: |
   | [...ks/python/apache\_beam/runners/worker/sdk\_worker.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy93b3JrZXIvc2RrX3dvcmtlci5weQ==) | `88.94% <0.00%> (-0.16%)` | :arrow_down: |
   | [sdks/python/apache\_beam/transforms/util.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vdHJhbnNmb3Jtcy91dGlsLnB5) | `96.06% <0.00%> (-0.16%)` | :arrow_down: |
   | [...hon/apache\_beam/runners/worker/bundle\_processor.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy93b3JrZXIvYnVuZGxlX3Byb2Nlc3Nvci5weQ==) | `93.54% <0.00%> (+0.24%)` | :arrow_up: |
   | [sdks/python/apache\_beam/transforms/combiners.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vdHJhbnNmb3Jtcy9jb21iaW5lcnMucHk=) | `93.43% <0.00%> (+0.38%)` | :arrow_up: |
   | [...che\_beam/runners/interactive/interactive\_runner.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vcnVubmVycy9pbnRlcmFjdGl2ZS9pbnRlcmFjdGl2ZV9ydW5uZXIucHk=) | `91.39% <0.00%> (+1.32%)` | :arrow_up: |
   | [.../python/apache\_beam/testing/test\_stream\_service.py](https://codecov.io/gh/apache/beam/pull/22795/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c2Rrcy9weXRob24vYXBhY2hlX2JlYW0vdGVzdGluZy90ZXN0X3N0cmVhbV9zZXJ2aWNlLnB5) | `92.85% <0.00%> (+4.76%)` | :arrow_up: |
   
   

