Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/04/25 22:11:48 UTC

[GitHub] [beam] TheNeuralBit commented on a diff in pull request #17460: Change return type for PytorchInferenceRunner

TheNeuralBit commented on code in PR #17460:
URL: https://github.com/apache/beam/pull/17460#discussion_r858066116


##########
sdks/python/apache_beam/ml/inference/pytorch.py:
##########
@@ -37,7 +37,7 @@ def __init__(self, device: torch.device):
     self._device = device
 
   def run_inference(self, batch: List[torch.Tensor],
-                    model: torch.nn.Module) -> Iterable[torch.Tensor]:
+                    model: torch.nn.Module) -> Iterable[PredictionResult]:
     """

Review Comment:
   If the contract is that this should produce an iterable, I wonder if we should avoid materializing the list by returning a generator?
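
For illustration, here is a minimal sketch of the generator approach the reviewer is suggesting, assuming PredictionResult is a simple container pairing each input example with its inference. The class name GeneratorInferenceRunner and the field names example/inference are placeholders for this sketch, not taken from the PR:

from typing import Iterable, List, NamedTuple

import torch


class PredictionResult(NamedTuple):
  # Illustrative stand-in for the Beam container; real field names may differ.
  example: torch.Tensor
  inference: torch.Tensor


class GeneratorInferenceRunner:
  """Hypothetical, simplified runner showing the generator approach."""

  def __init__(self, device: torch.device):
    self._device = device

  def run_inference(self, batch: List[torch.Tensor],
                    model: torch.nn.Module) -> Iterable[PredictionResult]:
    # Yielding one result per example keeps the return type a lazy Iterable,
    # as the review comment suggests, instead of materializing a full list.
    with torch.no_grad():
      for example in batch:
        prediction = model(example.to(self._device))
        yield PredictionResult(example, prediction)

Because the declared return type is Iterable[PredictionResult], a generator satisfies the contract, and a caller that genuinely needs a materialized list can still wrap the call in list(...).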


