Posted to user@spark.apache.org by 牛兆捷 <nz...@gmail.com> on 2014/09/13 05:00:07 UTC

workload for spark

We know that some of Spark's memory is used for computation (e.g., the shuffle buffer)
and some is used for caching RDDs for future reuse.
Is there an existing workload that exercises both of them? I want to do
a performance study by adjusting the ratio between the two.
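For what it's worth, in Spark 1.x that ratio is governed by the `spark.storage.memoryFraction` and `spark.shuffle.memoryFraction` settings. A minimal sketch of a synthetic workload that stresses both regions (the dataset sizes, key cardinality, and fraction values below are arbitrary choices for illustration, not recommendations):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // pair-RDD functions in Spark 1.x
import org.apache.spark.storage.StorageLevel

object CacheShuffleWorkload {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cache-shuffle-workload")
      // The ratio under study: storage (RDD cache) vs. shuffle memory.
      // Spark 1.x defaults are 0.6 and 0.2; 0.4/0.4 here is just an example.
      .set("spark.storage.memoryFraction", "0.4")
      .set("spark.shuffle.memoryFraction", "0.4")
    val sc = new SparkContext(conf)

    // A synthetic key-value dataset, cached in memory (uses storage memory).
    val data = sc.parallelize(0 until 10000000, 100)
      .map(i => (i % 1000, i.toLong))
      .persist(StorageLevel.MEMORY_ONLY)
    data.count() // materialize the cache

    // Reuse the cached RDD across several shuffles (uses shuffle memory
    // for the aggregation buffers on the map and reduce sides).
    for (_ <- 1 to 5) {
      data.reduceByKey(_ + _).count()
      data.groupByKey().mapValues(_.size).count()
    }

    sc.stop()
  }
}
```

Sweeping the two fractions across runs of a job like this, and watching cache eviction and shuffle spill metrics in the web UI, would give you the kind of sensitivity data you're after.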