Posted to issues@spark.apache.org by "Davies Liu (JIRA)" <ji...@apache.org> on 2015/09/11 22:44:47 UTC

[jira] [Created] (SPARK-10572) Investigate the contention between tasks in the same executor

Davies Liu created SPARK-10572:
----------------------------------

             Summary: Investigate the contention between tasks in the same executor
                 Key: SPARK-10572
                 URL: https://issues.apache.org/jira/browse/SPARK-10572
             Project: Spark
          Issue Type: Task
            Reporter: Davies Liu


According to the benchmark results from Jesse F Chen, it's surprising to see such a large difference (4X) depending on the number of executors; we should investigate the reason.

```
> Just be curious how the difference would be if you use 20 executors
> and 20G memory for each executor..

So I tried the following combinations:

(GB per executor X # executors)  (query response time in secs)
20X20	415
10X40	230
5X80	141
4X100	128
2X200	104

CPU utilization is high, so spreading more JVMs onto more vCores helps in this case.
For other workloads where memory utilization outweighs CPU, I can see larger JVM
sizes may be more beneficial. It's for sure case-by-case.

Seems the codegen and scheduler overheads are negligible.
```
https://www.mail-archive.com/user@spark.apache.org/msg36486.html
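Note that every row in the quoted table keeps total executor memory at roughly 400 GB (20x20, 10x40, 5x80, 4x100, 2x200) and only varies how it is split across JVMs. As a minimal sketch of how one such row (20 executors x 20 GB each) could be configured in Scala: the config keys spark.executor.instances and spark.executor.memory are standard Spark settings, but the app name, master, and core count below are illustrative assumptions, not details from the thread.

```
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the 20 x 20G row from the table above.
// Only the executor count and memory come from the thread; the app name,
// master, and core count are placeholders for illustration.
val conf = new SparkConf()
  .setAppName("executor-sizing-benchmark")   // hypothetical app name
  .setMaster("yarn")                         // assumption: YARN deployment
  .set("spark.executor.instances", "20")     // number of executors
  .set("spark.executor.memory", "20g")       // heap per executor
  .set("spark.executor.cores", "1")          // assumption: 1 core per executor
val sc = new SparkContext(conf)

// ... run the benchmark query here, then repeat with the other
// instances/memory combinations from the table to reproduce each row.

sc.stop()
```

Reproducing each row should only require changing those two executor settings; comparing task-level metrics across runs would help show where the contention within larger executors comes from.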


