Posted to user@spark.apache.org by Vadim Semenov <va...@datadoghq.com> on 2016/08/27 02:40:02 UTC
Dynamically change executor settings
Hi spark users,
I wonder if it's possible to change executor settings on the fly.
I have the following use case: I have many non-splittable, skewed files
in a custom format that I read using a custom Hadoop RecordReader. These
files range from small to huge, and I'd like to use only one or two cores per
executor while they are being processed (so each task can use the whole
heap). Once they are processed, I'd like to enable all cores.
I know I can achieve this by splitting the work into two separate jobs, but I
wonder if there's a way to get the behavior I described within a single job.
Thanks!
Re: Dynamically change executor settings
Posted by li...@gmail.com.
Hi,
No, currently you can't change the setting.
// maropu
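Since per-executor core counts are fixed when the application starts, the two-job split mentioned above is the usual workaround: write intermediate output to durable storage, then resubmit with different settings. A minimal sketch, assuming the work can be checkpointed between phases (the class names, jar name, memory sizes, and HDFS paths below are placeholders; the flags are standard spark-submit options):

```shell
# Phase 1: decode the skewed, non-splittable files with few cores per
# executor, so each concurrent task gets most of the executor heap.
# (Class, jar, and path names are placeholders.)
spark-submit \
  --class com.example.DecodePhase \
  --executor-memory 16g \
  --executor-cores 2 \
  my-app.jar hdfs:///raw-input hdfs:///decoded

# Phase 2: the intermediate data is now in a splittable format, so
# resubmit with all executor cores enabled for normal parallelism.
spark-submit \
  --class com.example.ProcessPhase \
  --executor-memory 16g \
  --executor-cores 8 \
  my-app.jar hdfs:///decoded hdfs:///output
```

Note that dynamic allocation (spark.dynamicAllocation.enabled) can grow or shrink the *number* of executors at runtime, but it does not change the cores or memory of each executor, so it doesn't help here.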
On 2016/08/27 at 11:40, Vadim Semenov <va...@datadoghq.com> wrote: