Posted to issues@beam.apache.org by "Beam JIRA Bot (Jira)" <ji...@apache.org> on 2022/05/30 17:00:00 UTC

[jira] [Commented] (BEAM-14368) Investigate load state_dict vs loading whole model

    [ https://issues.apache.org/jira/browse/BEAM-14368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17544011#comment-17544011 ] 

Beam JIRA Bot commented on BEAM-14368:
--------------------------------------

This issue is assigned but has not received an update in 30 days so it has been labeled "stale-assigned". If you are still working on the issue, please give an update and remove the label. If you are no longer working on the issue, please unassign so someone else may work on it. In 7 days the issue will be automatically unassigned.

> Investigate load state_dict vs loading whole model
> --------------------------------------------------
>
>                 Key: BEAM-14368
>                 URL: https://issues.apache.org/jira/browse/BEAM-14368
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Anand Inguva
>            Assignee: Anand Inguva
>            Priority: P2
>              Labels: run-inference, stale-assigned
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Loading a PyTorch model as a whole has some issues with pickling; investigate this by running some experiments. If the model is too large, the current PyTorch RunInference implementation would fail because of memory limits.
>
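> For background, a minimal sketch of the two loading styles (the LinearModel class is a made-up example, not from Beam; torch.save/torch.load and load_state_dict are standard PyTorch APIs):
>
>     import torch
>
>     class LinearModel(torch.nn.Module):
>         # Toy model used only to illustrate the two styles.
>         def __init__(self):
>             super().__init__()
>             self.linear = torch.nn.Linear(10, 1)
>
>         def forward(self, x):
>             return self.linear(x)
>
>     model = LinearModel()
>
>     # Whole-model save: pickles the full object graph, so unpickling
>     # requires the original class definition to be importable where
>     # the model is loaded (e.g. on every worker).
>     torch.save(model, 'model.pt')
>     whole = torch.load('model.pt')
>
>     # state_dict save: serializes only the tensors; the class is
>     # instantiated explicitly and the weights are loaded into it.
>     torch.save(model.state_dict(), 'state_dict.pt')
>     restored = LinearModel()
>     restored.load_state_dict(torch.load('state_dict.pt'))
>     restored.eval()
>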
> 1. We can pass the model class to `load_model` of PyTorchModelLoader and load the model there (sketched below). This wouldn't pickle the model object; only the class would be pickled, and the model would be instantiated on the workers.
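>
> A rough sketch of what option 1 could look like (PytorchModelLoaderSketch, its constructor arguments, and this load_model signature are illustrative names, not the actual Beam API):
>
>     import torch
>
>     class PytorchModelLoaderSketch:
>         # Hypothetical sketch; the real PyTorchModelLoader may differ.
>         def __init__(self, state_dict_path, model_class, model_params):
>             # Only the path, the class object, and plain constructor
>             # kwargs get pickled and shipped to the workers.
>             self._state_dict_path = state_dict_path
>             self._model_class = model_class
>             self._model_params = model_params
>
>         def load_model(self):
>             # Runs on the worker: instantiate the class, then load
>             # only the tensors from the saved state_dict.
>             model = self._model_class(**self._model_params)
>             model.load_state_dict(torch.load(self._state_dict_path))
>             model.eval()
>             return model
>
> This would keep the pickled payload small regardless of model size, since the weights are read from storage on each worker rather than serialized with the pipeline.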



--
This message was sent by Atlassian Jira
(v8.20.7#820007)