Posted to issues@spark.apache.org by "Jason Guo (JIRA)" <ji...@apache.org> on 2018/08/07 00:41:00 UTC

[jira] [Created] (SPARK-25038) Accelerate Spark Plan generation when Spark SQL read large amount of data

Jason Guo created SPARK-25038:
---------------------------------

             Summary: Accelerate Spark Plan generation when Spark SQL read large amount of data
                 Key: SPARK-25038
                 URL: https://issues.apache.org/jira/browse/SPARK-25038
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.3.1
            Reporter: Jason Guo


When Spark SQL reads a large amount of data, it takes a long time (more than 10 minutes) to generate the physical plan and then the ActiveJob.

 

Example:

There is a table partitioned by date and hour, with more than 13 TB of data per hour and 185 TB per day. When we issue even a very simple SQL query against it, it takes a long time to generate the ActiveJob.
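For reference, a table with this layout might be declared as sketched below. The table name test_tbl and the device_id column are taken from the query further down; the column types and the Parquet format are assumptions for illustration only.
{code:scala}
// Minimal sketch of a table partitioned by date and hour.
// Only the date/hour partitioning comes from the issue description;
// column types and storage format are assumptions.
spark.sql("""
  CREATE TABLE test_tbl (
    device_id STRING,
    date      INT,
    hour      STRING
  )
  USING parquet
  PARTITIONED BY (date, hour)
""")
{code}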

 

The SQL statement is:
{code:sql}
select count(device_id) from test_tbl where date=20180731 and hour='21';
{code}
 

The SQL was issued at 2018-08-05 18:33:21:

!image-2018-08-07-08-38-01-984.png!

However, the job was not submitted until 2018-08-05 18:34:45, which is 1 minute and 24 seconds after the SQL was issued.
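A rough way to confirm that this gap is spent on plan generation in the driver (rather than in the job itself) is to time the planning step separately. The following is only a minimal sketch; it assumes a SparkSession named spark and reuses the query from above:
{code:scala}
// Build the DataFrame; nothing is executed yet.
val df = spark.sql(
  "select count(device_id) from test_tbl where date=20180731 and hour='21'")

// Force analysis, optimization and physical planning. No ActiveJob has been
// submitted at this point; the time measured here is pure driver-side work.
val start = System.nanoTime()
val plan = df.queryExecution.executedPlan
val planningSeconds = (System.nanoTime() - start) / 1e9
println(f"Physical plan generated in $planningSeconds%.1f s")

// Only now is an ActiveJob submitted to the scheduler.
df.collect()
{code}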