Posted to issues@spark.apache.org by "Jason Guo (JIRA)" <ji...@apache.org> on 2019/05/21 12:04:01 UTC

[jira] [Updated] (SPARK-27792) SkewJoin hint

     [ https://issues.apache.org/jira/browse/SPARK-27792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Guo updated SPARK-27792:
------------------------------
    Description: 
This feature is designed to handle data skew in joins.

 

*Scenario*
 * A big table (tableA) that contains a few skewed keys
 * A small table (tableB) that has no skewed keys but is larger than the broadcast threshold
 * When tableA.join(tableB) is executed, a few tasks are much slower than the others because they have to process the skewed keys

 

*Experiment*

tableA has two skewed keys, 9500048 and 9500096 (9500000 + 1 * 48 and 9500000 + 2 * 48, produced by the CASE branch below):
{code:sql}
INSERT OVERWRITE TABLE tableA
-- all ids below 908000000 collapse onto two hot keys: 9500048 or 9500096
SELECT CAST(CASE WHEN id < 908000000
            THEN 9500000 + (CAST(RAND() * 2 AS INT) + 1) * 48
            ELSE CAST(id / 100 AS INT)
       END AS STRING) AS id,
       'A' AS value
FROM ids
WHERE id BETWEEN 900000000 AND 1050000000;{code}
tableB has no skewed keys
{code:sql}
INSERT OVERWRITE TABLE tableB
-- evenly distributed keys (id / 100), so no single key dominates
SELECT CAST(CAST(id / 100 AS INT) AS STRING) AS id,
       'B' AS value
FROM ids
WHERE id BETWEEN 950000000 AND 950500000;{code}
 

Join them with spark.sql.autoBroadcastJoinThreshold set to 3000 so that tableB is not broadcast and a shuffle join is used.
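The threshold can be applied for the session before running the join, for example with a SQL SET command (a minimal sketch; setting it through spark.conf.set or --conf works the same way):
{code:sql}
-- Lower the broadcast threshold (in bytes) so the planner does not
-- broadcast tableB and falls back to a shuffle join.
SET spark.sql.autoBroadcastJoinThreshold=3000;
{code}
The join itself is: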
{code:sql}
INSERT OVERWRITE TABLE result_with_skew
SELECT tableA.id, tableA.value, tableB.value
FROM tableA
JOIN tableB
ON tableA.id = tableB.id;
{code}
 

!image-2019-05-21-20-02-57-056.png!
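The screenshot presumably shows a handful of straggler tasks processing the two hot keys while the rest finish quickly. Until a skew-join hint exists, the usual manual workaround is key salting: split each key on the big side into several sub-keys and replicate the small side accordingly. A rough sketch of that rewrite, assuming tableA and tableB have columns (id, value) as in the query above and using 8 salt buckets (the bucket count and the view names are illustrative):
{code:sql}
-- Key salting: a manual workaround that a skew-join hint could automate.
-- Big side: append a random salt (0-7) to the join key, so each hot key
-- is spread across 8 different reduce partitions.
CREATE OR REPLACE TEMPORARY VIEW tableA_salted AS
SELECT id,
       CONCAT(id, '_', CAST(FLOOR(RAND() * 8) AS STRING)) AS salted_id,
       value
FROM tableA;

-- Small side: replicate every row once per salt value so that each
-- salted key on the big side still finds its match.
CREATE OR REPLACE TEMPORARY VIEW tableB_salted AS
SELECT id,
       CONCAT(id, '_', CAST(s AS STRING)) AS salted_id,
       value
FROM tableB
LATERAL VIEW EXPLODE(ARRAY(0, 1, 2, 3, 4, 5, 6, 7)) t AS s;

-- Same join as above, but on the salted key: each hot key is now
-- handled by up to 8 tasks instead of one.
SELECT a.id, a.value, b.value
FROM tableA_salted a
JOIN tableB_salted b
ON a.salted_id = b.salted_id;
{code}
A SkewJoin hint could let the optimizer apply this split-and-replicate strategy automatically, and only for the keys that are actually skewed, instead of salting every row by hand.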

> SkewJoin hint
> -------------
>
>                 Key: SPARK-27792
>                 URL: https://issues.apache.org/jira/browse/SPARK-27792
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.4.3
>            Reporter: Jason Guo
>            Priority: Major
>


