Posted to issues@spark.apache.org by "Wang, Gang (Jira)" <ji...@apache.org> on 2019/09/20 12:51:00 UTC

[jira] [Created] (SPARK-29189) Add an option to ignore block locations when listing files

Wang, Gang created SPARK-29189:
----------------------------------

             Summary: Add an option to ignore block locations when listing files
                 Key: SPARK-29189
                 URL: https://issues.apache.org/jira/browse/SPARK-29189
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.4.0
            Reporter: Wang, Gang


In our production environment we run a pure Spark cluster, where computation is separated from the storage layer. I think this deployment mode is fairly common, and in it data locality is never achievable.
 Spark's scheduler already offers configurations to reduce the time spent waiting for data locality (e.g. "spark.locality.wait"). The problem, however, is that in the file listing phase the location information of every file, and of every block inside each file, is fetched from the distributed file system. In a production environment a table can be so large that merely fetching all of this location information takes tens of seconds.
 To improve this scenario, Spark should provide an option to ignore data locality entirely: all the listing phase needs are the files themselves, without any block location information. A sketch of the underlying file-system calls is shown below.
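For reference, the cost difference comes down to which Hadoop FileSystem call the listing uses. Below is a minimal sketch (not Spark's actual internals; the table path is made up) contrasting listing with and without block locations:

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, LocatedFileStatus, Path, RemoteIterator}

object ListingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val dir  = new Path("hdfs:///warehouse/some_table") // hypothetical table path
    val fs   = dir.getFileSystem(conf)

    // With block locations: the namenode must resolve the replica hosts of
    // every block of every file, which is what gets slow on huge tables.
    val withLocations: RemoteIterator[LocatedFileStatus] = fs.listLocatedStatus(dir)
    while (withLocations.hasNext) {
      val status = withLocations.next()
      status.getBlockLocations.foreach(loc => println(loc.getHosts.mkString(",")))
    }

    // Without block locations: file paths, sizes and modification times only,
    // which is all a compute-only cluster can actually make use of.
    val withoutLocations: Array[FileStatus] = fs.listStatus(dir)
    withoutLocations.foreach(status => println(status.getPath))
  }
}
{code}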

 

We ran a benchmark in our production environment; after ignoring the block locations, we saw a substantial improvement.
||Table Size||Total File Count||Total Block Count||Listing Duration (with block locations)||Listing Duration (without block locations)||
|22.6 T|30000|120000|16.841 s|1.730 s|
|28.8 T|42001|148964|10.099 s|2.858 s|
|3.4 T|20000|20000|5.833 s|4.881 s|
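If such an option is added, usage could look like the sketch below. The configuration key "spark.sql.sources.ignoreDataLocality" here is only a placeholder for illustration; the real name would be decided in the patch:

{code:scala}
import org.apache.spark.sql.SparkSession

object IgnoreLocalityExample {
  def main(args: Array[String]): Unit = {
    // "spark.sql.sources.ignoreDataLocality" is a hypothetical key used for
    // illustration; the actual option name would be settled in the patch.
    val spark = SparkSession.builder()
      .appName("ListingWithoutBlockLocations")
      .config("spark.sql.sources.ignoreDataLocality", "true")
      .getOrCreate()

    // With the option on, the file index would be built from plain file
    // statuses, so scan tasks report no preferred hosts to the scheduler.
    spark.read.parquet("hdfs:///warehouse/some_table").count()

    spark.stop()
  }
}
{code}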

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org