Posted to issues@spark.apache.org by "Erik Erlandson (JIRA)" <ji...@apache.org> on 2019/03/27 23:13:00 UTC

[jira] [Created] (SPARK-27296) User Defined Aggregating Functions (UDAFs) have a major efficiency problem

Erik Erlandson created SPARK-27296:
--------------------------------------

             Summary: User Defined Aggregating Functions (UDAFs) have a major efficiency problem
                 Key: SPARK-27296
                 URL: https://issues.apache.org/jira/browse/SPARK-27296
             Project: Spark
          Issue Type: Bug
          Components: Spark Core, SQL, Structured Streaming
    Affects Versions: 2.4.0, 2.3.3, 3.0.0
            Reporter: Erik Erlandson


Spark's UDAFs appear to serialize and deserialize to/from the MutableAggregationBuffer for each row.  This gist shows a small reproducing UDAF and a Spark shell session:

[https://gist.github.com/erikerlandson/3c4d8c6345d1521d89e0d894a423046f]

The UDAF and its companion UDT are designed to count the number of times that ser/de is invoked for the aggregator.  The Spark shell session demonstrates that ser/de is executed on every row of the DataFrame.
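
For reference, here is a rough sketch (not the gist itself) of a minimal UDAF of the kind being instrumented; the actual reproducer in the gist additionally defines a companion UDT whose serialize/deserialize methods increment counters, which is how the per-row conversions become visible:

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Hypothetical minimal UDAF (a plain sum over a double column), shown only to
// illustrate the API whose per-row path is described above.
class SumUDAF extends UserDefinedAggregateFunction {
  def inputSchema: StructType = StructType(StructField("x", DoubleType) :: Nil)
  def bufferSchema: StructType = StructType(StructField("sum", DoubleType) :: Nil)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = 0.0

  // Called once per input row; the issue is that Spark converts the whole
  // aggregation buffer to/from its internal representation around each call.
  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    buffer(0) = buffer.getDouble(0) + input.getDouble(0)

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)

  def evaluate(buffer: Row): Any = buffer.getDouble(0)
}

// usage (Spark shell, assumes `import spark.implicits._`):
//   val sumUdaf = new SumUDAF
//   spark.range(1000).toDF("x").select(sumUdaf($"x".cast("double"))).show()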

Note that Spark's pre-defined aggregators do not have this problem: they are based on an internal aggregating trait that does the right thing and only invokes ser/de at points such as partition boundaries and when presenting final results.

This is a major problem for UDAFs, as it means that every UDAF is doing a massive amount of unnecessary work per row, including but not limited to Row object allocations. For a more realistic UDAF whose aggregation state has its own non-trivial internal structure, the overhead is correspondingly worse.
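
For contrast (an illustration on my part, not something from the report above), the typed Aggregator API keeps its aggregation buffer as a plain JVM object between rows, so a sketch along the following lines should not pay the per-row conversion cost, as far as I understand the current implementation:

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

// Hypothetical mean aggregator with a (sum, count) buffer. The buffer stays a
// JVM object while the rows of a partition are reduced; the bufferEncoder is
// (per my understanding) only exercised when buffers are exchanged/merged and
// when the final result is produced.
object MeanAgg extends Aggregator[Double, (Double, Long), Double] {
  def zero: (Double, Long) = (0.0, 0L)
  def reduce(b: (Double, Long), x: Double): (Double, Long) = (b._1 + x, b._2 + 1)
  def merge(b1: (Double, Long), b2: (Double, Long)): (Double, Long) = (b1._1 + b2._1, b1._2 + b2._2)
  def finish(b: (Double, Long)): Double = if (b._2 == 0) Double.NaN else b._1 / b._2
  def bufferEncoder: Encoder[(Double, Long)] = Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

// usage (Spark shell, assumes `import spark.implicits._`):
//   val ds = spark.range(1000).map(_.toDouble)
//   ds.select(MeanAgg.toColumn).show()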


