Posted to user@spark.apache.org by Peter Liu <pe...@gmail.com> on 2018/10/11 19:35:05 UTC

re: yarn resource overcommit: cpu / vcores

Hi there,

Is there any best-practice guideline on YARN resource overcommit for CPU /
vcores, such as YARN config options, or candidate cases that are ideal for
overcommitting vcores?

The slide deck below (from 2016) seems to address the memory overcommit topic
and hints at a "future" topic on CPU overcommit:
https://www.slideshare.net/HadoopSummit/investing-the-effects-of-overcommitting-yarn-resources

Any help/hints would be very much appreciated!

Regards,

Peter

FYI:
I have a system with 80 vcores and a relatively light Spark Streaming
workload. Overcommitting the vcore resource (to 100 here) seems to improve
the average Spark batch time, but I need a better understanding of this
practice. (A sketch of the config change I mean follows the table below.)
Skylake (1 x 900K msg/sec)   total batch # (avg)   avg batch time in ms (avg)   avg user cpu (%)   nw read (MB/sec)
70 vcores                    178.20                8154.69                      n/a                n/a
80 vcores                    177.40                7865.44                      27.85              222.31
100 vcores                   177.00                7209.37                      30.02              220.86
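
For reference, this is roughly how the overcommit above was expressed; a
minimal sketch assuming the standard yarn-site.xml properties, with the
values mirroring the 100-vcore run in the table (they are illustrative, not
a recommendation):

  <!-- yarn-site.xml: advertise more vcores to YARN than the 80 physical cores -->
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>100</value>  <!-- 80 physical cores, overcommitted to 100 -->
  </property>
  <property>
    <!-- allow a single container request up to the full overcommitted count -->
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>100</value>
  </property>

Note that with the CapacityScheduler's default DefaultResourceCalculator,
only memory is considered for scheduling and the vcore counts are effectively
advisory; vcores are only enforced once yarn.scheduler.capacity.resource-calculator
is set to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
(and actual CPU isolation additionally requires cgroups via the
LinuxContainerExecutor).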