Posted to issues@spark.apache.org by "Xiangrui Meng (JIRA)" <ji...@apache.org> on 2019/04/20 17:18:00 UTC

[jira] [Updated] (SPARK-26412) Allow Pandas UDF to take an iterator of pd.DataFrames or Arrow batches

     [ https://issues.apache.org/jira/browse/SPARK-26412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiangrui Meng updated SPARK-26412:
----------------------------------
    Summary: Allow Pandas UDF to take an iterator of pd.DataFrames or Arrow batches  (was: Allow Pandas UDF to take an iterator of pd.DataFrames or Arrow batches for the entire partition)

> Allow Pandas UDF to take an iterator of pd.DataFrames or Arrow batches
> ----------------------------------------------------------------------
>
>                 Key: SPARK-26412
>                 URL: https://issues.apache.org/jira/browse/SPARK-26412
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 3.0.0
>            Reporter: Xiangrui Meng
>            Priority: Major
>
> Pandas UDF is the ideal connection between PySpark and DL model inference workloads. However, the user needs to load a model file first to make predictions, and it is common to see models of size ~100MB or bigger. If Pandas UDF execution is limited to batch scope, the user must repeatedly load the same model for every batch in the same Python worker process, which is inefficient. I created this JIRA to discuss possible solutions.
> Essentially we need to support "start()" and "finish()" besides "apply". We can either provide those interfaces or simply hand user code an iterator of batches as pd.DataFrames or Arrow tables and let it manage setup itself (see the sketch below).
> Another benefit is that, combined with Python's asyncio, an iterator interface gives users the flexibility to implement data pipelining (also sketched below).
> cc: [~icexelloss] [~bryanc] [~holdenk] [~hyukjin.kwon] [~ueshin] [~smilegator]
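
For illustration, here is a minimal sketch of the iterator-of-pd.DataFrames interface discussed above, written against DataFrame.mapInPandas as it later shipped in Spark 3.0. The load_model/model.predict calls are hypothetical stand-ins for a real ~100MB model:

    from typing import Iterator

    import pandas as pd
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1.0,), (2.0,), (3.0,)], ["x"])

    def predict_batches(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
        # Expensive setup runs once per partition, not once per batch.
        # model = load_model("/path/to/model")  # hypothetical ~100MB load
        for batch in batches:
            # yield batch.assign(pred=model.predict(batch[["x"]]))
            yield batch.assign(pred=batch["x"] * 2.0)  # stand-in for model.predict

    df.mapInPandas(predict_batches, schema="x double, pred double").show()

And a sketch of the pipelining benefit: overlap fetching the next batch with compute on the current one. The JIRA mentions asyncio; this sketch gets the same overlap with a single worker thread, and all names here are illustrative:

    from concurrent.futures import ThreadPoolExecutor

    def prefetched(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
        # Keep one batch in flight: the worker thread pulls batch N+1
        # while the caller is still computing on batch N.
        with ThreadPoolExecutor(max_workers=1) as pool:
            it = iter(batches)
            future = pool.submit(next, it, None)
            while True:
                batch = future.result()
                if batch is None:  # iterator exhausted
                    return
                future = pool.submit(next, it, None)
                yield batch

Inside predict_batches, iterating over prefetched(batches) instead of batches hides batch deserialization behind model inference.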



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org