Posted to commits@beam.apache.org by da...@apache.org on 2022/09/09 19:47:42 UTC

[beam] 01/01: Remove section from troubleshooting about fixed dictionary issue

This is an automated email from the ASF dual-hosted git repository.

damccorm pushed a commit to branch users/damccorm/inferenceDictReturnDocs
in repository https://gitbox.apache.org/repos/asf/beam.git

commit 71b0d6cf4f30b1a2e33a662f1c5c5975d5804084
Author: Danny McCormick <da...@google.com>
AuthorDate: Fri Sep 9 15:47:34 2022 -0400

    Remove section from troubleshooting about fixed dictionary issue
---
 .../site/content/en/documentation/sdks/python-machine-learning.md | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/website/www/site/content/en/documentation/sdks/python-machine-learning.md b/website/www/site/content/en/documentation/sdks/python-machine-learning.md
index 52c77794903..e462345d862 100644
--- a/website/www/site/content/en/documentation/sdks/python-machine-learning.md
+++ b/website/www/site/content/en/documentation/sdks/python-machine-learning.md
@@ -213,14 +213,6 @@ For more information, see [`KeyedModelHander`](https://beam.apache.org/releases/
 
 If you run into problems with your pipeline or job, this section lists issues that you might encounter and provides suggestions for how to fix them.
 
-### Incorrect inferences in the PredictionResult object
-
-In some cases, the `PredictionResult` output might not include the correct predictions in the `inferences` field. This issue occurs when you use a model whose inferences return a dictionary that maps keys to predictions and other metadata. An example return type is `Dict[str, Tensor]`.
-
-The RunInference API currently expects outputs to be an `Iterable[Any]`. Example return types are `Iterable[Tensor]` or `Iterable[Dict[str, Tensor]]`. When RunInference zips the inputs with the predictions, the predictions iterate over the dictionary keys instead of the batch elements. The result is that the key name is preserved but the prediction tensors are discarded. For more information, see the [Pytorch RunInference PredictionResult is a Dict](https://github.com/apache/beam/issues/ [...]
-
-To work with the current RunInference implementation, you can create a wrapper class that overrides the `model(input)` call. In PyTorch, for example, your wrapper would override the `forward()` function and return an output with the appropriate format of `List[Dict[str, torch.Tensor]]`. For more information, see the [HuggingFace language modeling example](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/inference/pytorch_language_modeling.py#L49).
-
 ### Unable to batch tensor elements
 
 RunInference uses dynamic batching. However, the RunInference API cannot batch tensor elements of different sizes, so samples passed to the RunInference transform must all have the same dimensions or length. If you provide images of different sizes or word embeddings of different lengths, the following error might occur:
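
For anyone reviewing this removal: the wrapper workaround that the deleted section described looks roughly like the sketch below. It is a minimal illustration only, not the exact code from the linked language modeling example; it assumes a HuggingFace `BertForMaskedLM` model, and the class name `UnbatchingModelWrapper` is invented here for the sketch.

    # Minimal sketch (assumed, not the exact code from the linked example):
    # wrap a PyTorch model whose forward() returns a Dict[str, Tensor] so that
    # RunInference receives a List[Dict[str, torch.Tensor]] -- one dict per
    # batch element -- instead of a single dict of batched tensors.
    from transformers import BertForMaskedLM

    class UnbatchingModelWrapper(BertForMaskedLM):
        def forward(self, **kwargs):
            output = super().forward(**kwargs)
            # output maps each key (e.g. "logits") to a tensor whose first
            # dimension is the batch; re-zip it into per-element dicts.
            return [
                dict(zip(output.keys(), element_values))
                for element_values in zip(*output.values())
            ]

With a wrapper along these lines, each element of the returned list corresponds to one input example, so RunInference can zip inputs and predictions correctly.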