Posted to issues@beam.apache.org by "Brian Hulette (Jira)" <ji...@apache.org> on 2022/05/06 19:36:00 UTC

[jira] [Commented] (BEAM-14337) Support **kwargs for PyTorch models.

    [ https://issues.apache.org/jira/browse/BEAM-14337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17533061#comment-17533061 ] 

Brian Hulette commented on BEAM-14337:
--------------------------------------

A couple more things I noticed looking at the BERT model you linked (both are illustrated in the sketch after this list):
 * Some parameters are not tensors, e.g. simple boolean configuration parameters.
 * Some parameters are tensors that are not batched (i.e. they don't have a batch_size dimension), but it seems we assume that there's always one batched dimension.
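
For concreteness, a minimal sketch of both cases with the linked model (a hedged example, not Beam API; shapes assume bert-base-uncased's 12 layers and 12 attention heads):

{code:python}
import torch
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer(["hello beam"], return_tensors="pt")

outputs = model(
    input_ids=encoding["input_ids"],            # batched: [batch_size, seq_len]
    attention_mask=encoding["attention_mask"],  # batched: [batch_size, seq_len]
    head_mask=torch.ones(12, 12),               # unbatched tensor: [num_layers, num_heads]
    output_hidden_states=True,                  # plain bool, not a tensor at all
)
{code}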

Do we need to provide a way for these parameters to be overridden (e.g. specified as a constant at construction time, and/or pulled in as a side input)?

We might also consider what to do if the user has a batch_size dimension that's not the first dimension.
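
As a concrete instance of that last point, torch.nn.Transformer with its default batch_first=False expects inputs shaped [seq_len, batch_size, features], so batching would have to stack along dim=1 rather than dim=0. A minimal sketch (the shapes are illustrative):

{code:python}
import torch

# Three unbatched examples, each shaped [seq_len, features]:
examples = [torch.randn(16, 768) for _ in range(3)]

# Stacking along a new leading dim gives [batch_size, seq_len, features]:
batch_first = torch.stack(examples, dim=0)   # [3, 16, 768]

# A model expecting [seq_len, batch_size, features] (e.g. torch.nn.Transformer
# with batch_first=False) needs stacking along dim=1 instead:
batch_second = torch.stack(examples, dim=1)  # [16, 3, 768]
{code}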

> Support **kwargs for PyTorch models.
> ------------------------------------
>
>                 Key: BEAM-14337
>                 URL: https://issues.apache.org/jira/browse/BEAM-14337
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Anand Inguva
>            Assignee: Andy Ye
>            Priority: P2
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some PyTorch models that subclass torch.nn.Module take extra parameters in their forward method. These extra parameters can be passed as a dict of keyword arguments (via **kwargs) or as positional arguments.
> Example of a PyTorch model supported by Hugging Face: [https://huggingface.co/bert-base-uncased]
> [Some torch models on Hugging Face|https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py]
> E.g. [https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertModel]
> {code:python}
> from transformers import BertModel
> 
> # Tensor1, Tensor2, Tensor3 are placeholder tensors:
> inputs = {
>     "input_ids": Tensor1,
>     "attention_mask": Tensor2,
>     "token_type_ids": Tensor3,
> }
> # BertModel.from_pretrained returns a subclass of torch.nn.Module:
> model = BertModel.from_pretrained("bert-base-uncased")
> outputs = model(**inputs)  # forward accepts the dict keys as keyword arguments{code}
>  
> The [Transformers|https://pytorch.org/hub/huggingface_pytorch-transformers/] integration in the PyTorch Hub is supported by Hugging Face as well.
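> A minimal sketch of loading a model through that Hub integration, following the usage documented on the linked page:
> {code:python}
> import torch
> 
> # Load a BERT model and its tokenizer via the entry points Hugging Face
> # registers with torch.hub:
> model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased')
> tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-uncased')
> {code}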
>  


