Posted to user@spark.apache.org by Bruno Faria <br...@hotmail.com> on 2019/03/04 02:51:41 UTC

Shuffle service with more than one executor

Hi,

I have a Spark standalone cluster running on Kubernetes, with pod anti-affinity configured for network performance.
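For context, the anti-affinity is a standard podAntiAffinity rule that keeps the worker pods on separate nodes, roughly like this (a sketch; the labels are illustrative, not my exact manifest):

    # fragment of the worker pod spec (labels illustrative)
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: spark-worker
          topologyKey: kubernetes.io/hostname  # at most one worker pod per node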

I’d like to enable Spark dynamic allocation, which requires the external shuffle service, but it looks like I can’t enable it when running more than one worker instance on the same node. Is there a way to accomplish this, or should I create one worker per pod?
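For reference, this is roughly the configuration involved (a sketch assuming a standard Spark standalone setup; the instance count is illustrative):

    # spark-defaults.conf (application side)
    spark.dynamicAllocation.enabled  true
    spark.shuffle.service.enabled    true

    # spark-env.sh on each worker pod
    SPARK_WORKER_INSTANCES=2
    SPARK_WORKER_OPTS="-Dspark.shuffle.service.enabled=true"

    # Every worker then tries to bind spark.shuffle.service.port
    # (7337 by default), which seems to be why a second worker
    # instance on the same host fails to start the shuffle service.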

Thanks