Posted to issues@spark.apache.org by "Herman van Hovell (JIRA)" <ji...@apache.org> on 2015/07/17 00:31:04 UTC

[jira] [Comment Edited] (SPARK-8682) Range Join for Spark SQL

    [ https://issues.apache.org/jira/browse/SPARK-8682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630449#comment-14630449 ] 

Herman van Hovell edited comment on SPARK-8682 at 7/16/15 10:31 PM:
--------------------------------------------------------------------

I have attached some performance testing code.

In this setup RangeJoin is 13-50 times faster than the Cartesian/Filter combination. However, the performance profile is a bit unexpected: the fewer records on the broadcasted side, the faster it is. This is the opposite of my expectation, because RangeJoin should have a bigger advantage when the number of broadcasted rows is larger. I am looking into this.


was (Author: hvanhovell):
Some Performance Testing code.

> Range Join for Spark SQL
> ------------------------
>
>                 Key: SPARK-8682
>                 URL: https://issues.apache.org/jira/browse/SPARK-8682
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Herman van Hovell
>         Attachments: perf_testing.scala
>
>
> Currently Spark SQL uses a Broadcast Nested Loop join (or a filtered Cartesian Join) when it has to execute the following range query:
> {noformat}
> SELECT A.*,
>        B.*
> FROM   tableA A
>        JOIN tableB B
>         ON A.start <= B.end
>          AND A.end > B.start
> {noformat}
> This is horribly inefficient. The performance of this query can be greatly improved, when one of the tables can be broadcasted, by creating a range index. A range index is basically a sorted map containing the rows of the smaller table, indexed by both the low and high keys. Using this structure, the complexity of the query would go from O(N * M) to O(N * 2 * LOG(M)), where N = the number of records in the larger table and M = the number of records in the smaller (indexed) table. A sketch of the idea is shown below the description.
> I have created a pull request for this. According to the [Spark SQL: Relational Data Processing in Spark|http://people.csail.mit.edu/matei/papers/2015/sigmod_spark_sql.pdf] paper (page 11, section 7.2), similar work has already been done by the ADAM project (I cannot locate the code though).
> Any comments and/or feedback are greatly appreciated.
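> For illustration, here is a minimal sketch of such a range index in Scala. The names (Row, RangeIndex, probe) and the sorted-array layout are placeholders chosen for this sketch, not the code in the pull request:
> {noformat}
> // Hypothetical row type for the broadcasted (smaller) table.
> case class Row(start: Long, end: Long, payload: String)
>
> // Rows of the small table kept sorted by the low key; a binary search on
> // that key bounds the candidate set for each probe in O(log M).
> class RangeIndex(rows: Array[Row]) {
>   private val byStart   = rows.sortBy(_.start)
>   private val startKeys = byStart.map(_.start)
>
>   // Index of the first element whose start key is strictly greater than
>   // `key` (a standard upper-bound binary search).
>   private def upperBound(key: Long): Int = {
>     var lo = 0
>     var hi = startKeys.length
>     while (lo < hi) {
>       val mid = (lo + hi) >>> 1
>       if (startKeys(mid) <= key) lo = mid + 1 else hi = mid
>     }
>     lo
>   }
>
>   // Rows r with r.start <= qEnd && r.end > qStart, i.e. the join
>   // condition from the SQL above.
>   def probe(qStart: Long, qEnd: Long): Seq[Row] = {
>     val limit = upperBound(qEnd)   // rows passing the low-key test
>     byStart.take(limit).filter(_.end > qStart).toSeq
>   }
> }
>
> // Usage: build the index once from the broadcasted side, then probe it
> // for every row streamed from the larger side.
> val index   = new RangeIndex(Array(Row(0, 10, "a"), Row(5, 20, "b"), Row(30, 40, "c")))
> val matches = index.probe(8, 12)   // rows "a" and "b"
> {noformat}
> Note that this sketch still scans the bounded candidate list linearly; indexing on the high keys as well, as described above, would presumably bound that side too, which is where the factor of two binary searches in O(N * 2 * LOG(M)) comes from.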



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org