Posted to issues@spark.apache.org by "Gopal Nagar (JIRA)" <ji...@apache.org> on 2016/12/09 13:58:58 UTC

[jira] [Created] (SPARK-18804) Join doesn't work in Spark on Bigger tables

Gopal Nagar created SPARK-18804:
-----------------------------------

             Summary: Join doesn't work in Spark on Bigger tables
                 Key: SPARK-18804
                 URL: https://issues.apache.org/jira/browse/SPARK-18804
             Project: Spark
          Issue Type: Bug
          Components: Input/Output
    Affects Versions: 1.6.1
            Reporter: Gopal Nagar


Hi All,

Spark 1.6.1 has been installed on a 3-node AWS EMR cluster with 32 GB RAM and 80 GB storage per node. I am trying to join two tables (1.2 GB and 900 MB) that have 4607818 and 14273378 rows respectively. It's running in client mode with YARN as the cluster manager.

If I put a limit of 100 in the select query, it works fine. But if I try to join the entire data set, the query runs for 3-4 hours and finally gets terminated. I can see there are always 18 GB free on each node.

I have tried increasing the number of executors/cores/partitions, but it still doesn't work. This has been tried in PySpark and submitted using the spark-submit command, but it doesn't run. Please advise.
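
For reference, the submit command was along these lines (the script name and the specific executor/core/partition numbers here are placeholders; the real values varied across attempts):

    spark-submit \
      --master yarn \
      --deploy-mode client \
      --num-executors 6 \
      --executor-cores 4 \
      --executor-memory 8G \
      --conf spark.sql.shuffle.partitions=200 \
      join_tables.py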


Join Query
--------------
SELECT * FROM table1 AS t1 JOIN table2 AS t2 ON t1.col = t2.col LIMIT 100;

(The failing run is the same query without the LIMIT clause.)
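
Roughly, the PySpark side looks like the following simplified sketch (the app name and the final count() action are illustrative, not the exact script; table1/table2/col are the names from the query above):

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="JoinLargeTables")
    sqlContext = HiveContext(sc)

    # Full join over both tables; the LIMIT 100 variant of this
    # query completes fine.
    result = sqlContext.sql(
        "SELECT * FROM table1 AS t1 JOIN table2 AS t2 ON t1.col = t2.col")

    # count() forces the shuffle join to actually execute.
    print(result.count())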





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
