Posted to dev@spark.apache.org by Holden Karau <ho...@pigscanfly.ca> on 2023/08/07 20:26:17 UTC

Improving Dynamic Allocation Logic for Spark 4+

So I was wondering if there is interest in revisiting some of how Spark is
doing its dynamic allocation for Spark 4+?

Some things that I've been thinking about:

- Advisory user input (e.g. a way to say after X is done I know I need Y
where Y might be a bunch of GPU machines)
- Configurable tolerance (e.g. if we have at most Z% over target no-op)
- Past runs of same job (e.g. stage X of job Y had a peak of K)
- Faster executor launches (I'm a little fuzzy on what we can do here, but
one area, for example, is that we set up and tear down an RPC connection to
the driver with a blocking call, which does seem to involve some locking
inside the driver at first glance)
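
As a concrete sketch of the "configurable tolerance" idea above (all names
here are mine, not an existing Spark API): if the current executor count is
within Z% above the target, the allocation manager could simply no-op
instead of downscaling.

```python
# Hypothetical sketch of a configurable tolerance check; neither the name
# nor the config exists in Spark today.
def should_no_op(current: int, target: int, tolerance_pct: float) -> bool:
    """Return True when we are over target but within the tolerance band,
    i.e. the allocation manager should no-op rather than kill executors."""
    if current <= target:
        return False
    return (current - target) / target * 100.0 <= tolerance_pct
```

For example, with a 10% tolerance, 105 running executors against a target
of 100 would be left alone, while 120 would still trigger a downscale.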

Is this an area other folks are thinking about? Should I make an epic we
can track ideas in? Or are folks generally happy with today's dynamic
allocation (or just busy with other things)?

-- 
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
YouTube Live Streams: https://www.youtube.com/user/holdenkarau

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
On this subject of launching both the driver and the executors using lazy
executor IDs: this can introduce complexity but could potentially be a
viable strategy in certain scenarios. Basically, your mileage may vary.

Pros:

   1. Faster Startup: launching the driver and initial executors
   simultaneously can reduce startup time by not waiting for the driver to
   allocate executor IDs dynamically.
   2. Better Workload Distribution: initial executors run alongside
   the driver.
   3. Simplified Configuration: resources are preallocated.

Cons:

   1. Complexity: coordinating the simultaneous launch adds complexity.
   2. Resource Overhead: contention at cluster start-up time.
   3. Wasted Resources on Failure: if the driver dies, all resources will
   be wasted; the executors will be left waiting and have to be terminated
   manually.
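
On point (3): if executors were launched alongside the driver, each
executor would need to wait and retry until the driver's endpoint becomes
reachable, with a bounded number of attempts so that orphaned executors can
exit on their own instead of needing manual termination. A rough sketch
(hypothetical, not current Spark code):

```python
import time

# Hypothetical sketch: an executor polling until the driver's RPC endpoint
# is reachable, giving up after max_attempts so orphaned executors can exit.
def await_driver(connect, max_attempts: int, backoff_s: float) -> bool:
    for attempt in range(1, max_attempts + 1):
        if connect():          # driver is up; proceed with registration
            return True
        if attempt < max_attempts:
            time.sleep(backoff_s)
    return False               # give up and self-terminate
```

In practice the backoff and attempt budget would have to be tuned so that
executors outlive a slow driver start but not a dead driver.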

HTH

On Thu, 24 Aug 2023 at 03:07, Holden Karau <ho...@pigscanfly.ca> wrote:

> One option could be to initially launch both drivers and initial executors
> (using the lazy executor ID allocation), but it would introduce a lot of
> complexity.
>
> On Wed, Aug 23, 2023 at 6:44 PM Qian Sun <qi...@gmail.com> wrote:
>
>> Hi Mich
>>
>> I agree with your opinion that the startup time of the Spark on
>> Kubernetes cluster needs to be improved.
>>
>> Regarding the fetching image directly, I have utilized ImageCache to
>> store the images on the node, eliminating the time required to pull images
>> from a remote repository, which does indeed lead to a reduction in
>> overall time, and the effect becomes more pronounced as the size of the
>> image increases.
>>
>> Additionally, I have observed that the driver pod takes a significant
>> amount of time from running to attempting to create executor pods, with an
>> estimated time expenditure of around 75%. We can also explore optimization
>> options in this area.
>>
>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> On this conversation, one of the issues I brought up was the driver
>>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>> single driver pod (the "master" in standalone terms) and a number of
>>> executors ("workers"). When executed on k8s, the driver and executors
>>> run on separate pods
>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>> the driver pod is launched, then the driver pod itself launches the
>>> executor pods. From my observation, in an auto-scaling cluster, the driver
>>> pod may take up to 40 seconds, followed by the executor pods. This is a
>>> considerable time for customers and it is painfully slow. Can we actually
>>> move away from the dependency on standalone mode and try to speed up k8s
>>> cluster formation?
>>>
>>> Another naive question, when the docker image is pulled from the
>>> container registry to the driver itself, this takes finite time. The docker
>>> image for executors could be different from that of the driver
>>> docker image. Since spark-submit presents this at the time of submission,
>>> can we save time by fetching the docker images straight away?
>>>
>>> Thanks
>>>
>>> Mich
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>>> wrote:
>>>
>>>> Splendid idea. 👍
>>>>
>>>> Mich Talebzadeh,
>>>> Solutions Architect/Engineering Lead
>>>> London
>>>> United Kingdom
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>>
>>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>
>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>> From my own perspective faster execution time especially with Spark
>>>>>> on tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>>> often bring up.
>>>>>>
>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>> poor performance of Spark on k8s autopilot with timelines starting the
>>>>>> driver itself and moving from Pending to Running phase (Spark 3.4.1 with
>>>>>> Java 11).
>>>>>>
>>>>>> HTH
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>> Solutions Architect/Engineering Lead
>>>>>> London
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>
>>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>>
>>>>>>> There were a few things that I was thinking along the same lines for
>>>>>>> some time now (a few overlap with @holden's points):
>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>>> for some units of resources, but when the RM provisions them, the driver
>>>>>>> cancels them.
>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>> higher costs for faster execution.
>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>> within the same spark application.
>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>
>>>>>>> Model-based learning would be awesome.
>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>
>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>> my friends in this domain. One lesson we learned was that it is hard to
>>>>>>> have a generic algorithm that works for all cases.
>>>>>>>
>>>>>>> Regards
>>>>>>> kalyan.
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>
>>>>>>>> Thanks for pointing out this feature to me. I will have a look when
>>>>>>>> I get there.
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>> London
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>>>>
>>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle
>>>>>>>>> data is written to a distributed filesystem or persisted in a remote
>>>>>>>>> shuffle service.
>>>>>>>>>
>>>>>>>>> Uniffle is a general-purpose remote shuffle service (
>>>>>>>>> https://github.com/apache/incubator-uniffle). It can enhance the
>>>>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>>> support the `ShuffleDriverComponents`; see [1].
>>>>>>>>>
>>>>>>>>> If you are interested in more details about Uniffle, see [2].
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>>
>>>>>>>>> [2]
>>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *From:* Mich Talebzadeh <mi...@gmail.com>
>>>>>>>>> *Date:* Tuesday, 8 August 2023, 06:53
>>>>>>>>> *Cc:* dev <de...@spark.apache.org>
>>>>>>>>> *Subject:* [Internet]Re: Improving Dynamic Allocation Logic for
>>>>>>>>> Spark 4+
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>>> without a shuffle service.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>
>>>>>>>>> London
>>>>>>>>>
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>>>>> getting the driver going in a timely manner.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>>
>>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> This is on Spark 3.4.1 with Java 11, on both the host running
>>>>>>>>> spark-submit and the docker image itself.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>>>>> can be done?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>
>>>>>>>>> London
>>>>>>>>>
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Oh great point
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> So I was wondering if there is interest in revisiting some of how
>>>>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Some things that I've been thinking about:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>>
>>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>>> no-op)
>>>>>>>>>
>>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>>
>>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>>> here, but one area, for example, is that we set up and tear down an RPC
>>>>>>>>> connection to the driver with a blocking call, which does seem to involve
>>>>>>>>> some locking inside the driver at first glance)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>
>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>
>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>
>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>
>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>
>>>>>>>>> --
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>
>>>>
>>
>> --
>> Regards,
>> Qian Sun
>>
> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.):
> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Another potential issue is that the Kubernetes autoscaler uses pod
resource requests to estimate the target cluster capacity, which incurs
additional latency for Spark, simply because the executor pods are not
there until after the driver has requested these executors. With
autoscaling, the resource requests defined for a Spark application's pods
will influence the scaling decisions, not what we actually need for the
workload. This could result in autoscaling decisions that are not optimal
for Spark. Maybe we should invest in a Spark-native autoscaler to somehow
adjust the cluster scale when needed.
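
To illustrate that two-step latency with a toy model (illustrative only,
not real autoscaler code): the autoscaler sizes the cluster from the
resource requests of pending pods, and the executor requests only appear
after the driver pod is already running, forcing a second, later scale-up.

```python
import math
from dataclasses import dataclass

# Toy model (illustrative only) of how a cluster autoscaler sizes the
# cluster from pending pod resource requests.
@dataclass
class PodRequest:
    cpu: float
    memory_mi: int

def nodes_needed(pending, node_cpu: float, node_mem_mi: int) -> int:
    # Size by whichever resource is the bottleneck across all pending pods.
    by_cpu = math.ceil(sum(p.cpu for p in pending) / node_cpu)
    by_mem = math.ceil(sum(p.memory_mi for p in pending) / node_mem_mi)
    return max(by_cpu, by_mem)

# Step 1: only the driver pod is pending, so the autoscaler sizes for it.
driver = [PodRequest(cpu=1.0, memory_mi=1433)]
# Step 2: executor pods become pending only after the driver is running,
# triggering a second scale-up -- the extra latency described above.
executors = [PodRequest(cpu=1.0, memory_mi=4096)] * 4
```

A Spark-aware autoscaler that knew the executor demand up front could fold
both steps into one scale-up decision.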

HTH


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 24 Aug 2023 at 20:04, Holden Karau <ho...@pigscanfly.ca> wrote:

> For now I've filed https://issues.apache.org/jira/browse/SPARK-44951 &
> https://issues.apache.org/jira/browse/SPARK-44950
>
> On Thu, Aug 24, 2023 at 11:54 AM Holden Karau <ho...@pigscanfly.ca>
> wrote:
>
>> So we can launch Spark execs at the same time as the driver (provided we
>> know *enough* to tell the execs who to talk to), but we'd need a bit of work
>> to allow the executor to "wait" for the driver to become available (since
>> we'd no longer know for sure it was present first). This could probably help
>> a lot with "smaller" Spark jobs.
>>
>> Another possibility could be to (potentially) explore extending this out to
>> the idea of "warm" Spark execs which are not tied to a particular driver
>> but start and wait for a driver to connect to them to "claim" them.
>>
>>
>> On Thu, Aug 24, 2023 at 8:48 AM Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Thanks both
>>>
>>> This current model assumes that the driver must be up and running
>>> before the executors are started. In other words, it is the driver that
>>> is in charge of managing the executors, independent of the scheduler, and
>>> it talks directly to the Kube apiserver to spawn the executors.
>>>
>>>
>>> This diagram of mine below may be incorrect because it assumes that the
>>> Kube apiserver asks the Scheduler ->2 to create the driver. Some assume that
>>> the Kube apiserver directly creates the driver pod -->3. Then the driver pod
>>> talks to the Kube apiserver to request creation of other pods through the
>>> Scheduler. However, this is time consuming. Can the Scheduler create
>>> executor pods at the same time as the driver pod?
>>>
>>> [image: gke2.png]
>>>
>>> The idea of lazy executor IDs is potentially helpful. As I understand in
>>> standalone mode, the executor IDs are assigned upfront before they are
>>> launched. In k8s, with lazy executor ID allocation, executor IDs are not
>>> assigned upfront when the executors are launched. Instead, they are
>>> assigned dynamically as tasks are scheduled to run on specific executors.
>>> This means that the executor IDs are assigned only when they are actually
>>> needed to run tasks, rather than in advance. This adds some form of
>>> optimization by reducing the overhead of managing executor IDs for
>>> executors that might not end up running tasks concurrently. It can be
>>> potentially useful, as correctly pointed out, in dealing with dynamic
>>> workload patterns, where the number of executors may vary based on demand.
>>>
>>> There are two scenarios:
>>>
>>> 1) Conventional k8s cluster.
>>>
>>>    - You choose the hardware (memory, cores). One node will be used for
>>>    the driver pod and others for the executor pods. If you get the hardware
>>>    wrong, you will get errors because your cluster is under-specced. Then you
>>>    have to create a more powerful cluster. For example, if you have a large
>>>    docker image, it may not fit into the driver pod memory.
>>>       - Action -> We ought to have a heuristic advisor to help us
>>>       estimate the correct spec for our k8s nodes before creating the cluster.
>>>
>>> 2) autopilot cluster or serverless cluster
>>>
>>>    - You choose the name of the cluster and the region, and the rest will
>>>    be taken care of. It is not that simple! Your starting node for the driver
>>>    may be inadequate, so I have seen the driver recreated on the assumption
>>>    that the underlying node hardware had to be redone. Although we can scale
>>>    horizontally, there is no way we can scale up the driver pod dynamically. I
>>>    am excluding gang scheduling here because the focus is on one Spark
>>>    application only for this study.
>>>
>>> I await comments so we can decide on what additional JIRAs are
>>> required, if agreed.
>>>
>>> Mich
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Thu, 24 Aug 2023 at 03:07, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>
>>>> One option could be to initially launch both drivers and initial
>>>> executors (using the lazy executor ID allocation), but it would introduce a
>>>> lot of complexity.
>>>>
>>>> On Wed, Aug 23, 2023 at 6:44 PM Qian Sun <qi...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Mich
>>>>>
>>>>> I agree with your opinion that the startup time of the Spark on
>>>>> Kubernetes cluster needs to be improved.
>>>>>
>>>>> Regarding the fetching image directly, I have utilized ImageCache to
>>>>> store the images on the node, eliminating the time required to pull images
>>>>> from a remote repository, which does indeed lead to a reduction in
>>>>> overall time, and the effect becomes more pronounced as the size of the
>>>>> image increases.
>>>>>
>>>>> Additionally, I have observed that the driver pod takes a significant
>>>>> amount of time from running to attempting to create executor pods, with an
>>>>> estimated time expenditure of around 75%. We can also explore optimization
>>>>> options in this area.
>>>>>
>>>>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> On this conversation, one of the issues I brought up was the driver
>>>>>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>>>>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>>>>> single driver pod (the "master" in standalone terms) and a number of
>>>>>> executors ("workers"). When executed on k8s, the driver and executors
>>>>>> run on separate pods
>>>>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>>>>> the driver pod is launched, then the driver pod itself launches the
>>>>>> executor pods. From my observation, in an auto-scaling cluster, the driver
>>>>>> pod may take up to 40 seconds, followed by the executor pods. This is a
>>>>>> considerable time for customers and it is painfully slow. Can we actually
>>>>>> move away from the dependency on standalone mode and try to speed up k8s
>>>>>> cluster formation?
>>>>>>
>>>>>> Another naive question, when the docker image is pulled from the
>>>>>> container registry to the driver itself, this takes finite time. The docker
>>>>>> image for executors could be different from that of the driver
>>>>>> docker image. Since spark-submit presents this at the time of submission,
>>>>>> can we save time by fetching the docker images straight away?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Mich
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <
>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>
>>>>>>> Splendid idea. 👍
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>> London
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>>>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>>>>
>>>>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> From my own perspective faster execution time especially with
>>>>>>>>> Spark on tin boxes (Dataproc & EC2) and Spark on k8s is something that
>>>>>>>>> customers often bring up.
>>>>>>>>>
>>>>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>>>>> poor performance of Spark on k8s autopilot with timelines starting the
>>>>>>>>> driver itself and moving from Pending to Running phase (Spark 3.4.1 with
>>>>>>>>> Java 11).
>>>>>>>>>
>>>>>>>>> HTH
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>> London
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>>>>>
>>>>>>>>>> There were a few things that I was thinking along the same lines
>>>>>>>>>> for some time now(few overlap with @holden 's points)
>>>>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver
>>>>>>>>>> asks for some units of resources. But when RM provisions them, the driver
>>>>>>>>>> cancels it.
>>>>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer
>>>>>>>>>> to choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>>>>> higher costs for faster execution.
>>>>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>>>>> within the same spark application.
>>>>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>>>>
>>>>>>>>>> Model-based learning would be awesome.
>>>>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>>>>
>>>>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>>>>> my friends in this domain. One lesson we had was, it is hard to have a
>>>>>>>>>> generic algorithm that worked for all cases.
>>>>>>>>>>
>>>>>>>>>> Regards
>>>>>>>>>> kalyan.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Thanks for pointing out this feature to me. I will have a look
>>>>>>>>>>> when I get there.
>>>>>>>>>>>
>>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>> London
>>>>>>>>>>> United Kingdom
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>    view my Linkedin profile
>>>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all
>>>>>>>>>>> responsibility for any loss, damage or destruction of data or any other
>>>>>>>>>>> property which may arise from relying on this email's technical content is
>>>>>>>>>>> explicitly disclaimed. The author will in no case be liable for any
>>>>>>>>>>> monetary damages arising from such loss, damage or destruction.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in
>>>>>>>>>>>> the `ShuffleDriverComponents` which indicates whether
>>>>>>>>>>>> shuffle data is written to a distributed filesystem or persisted in a
>>>>>>>>>>>> remote shuffle service.
>>>>>>>>>>>>
>>>>>>>>>>>> Uniffle is a general-purpose remote shuffle service (
>>>>>>>>>>>> https://github.com/apache/incubator-uniffle). It can enhance
>>>>>>>>>>>> the experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>>>>>> support the `ShuffleDriverComponents`; see [1].
>>>>>>>>>>>>
>>>>>>>>>>>> If you are interested in more details about Uniffle, see [2].
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>>>>>
>>>>>>>>>>>> [2]
>>>>>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> *From:* Mich Talebzadeh <mi...@gmail.com>
>>>>>>>>>>>> *Date:* Tuesday, 8 August 2023, 06:53
>>>>>>>>>>>> *Cc:* dev <de...@spark.apache.org>
>>>>>>>>>>>> *Subject:* [Internet]Re: Improving Dynamic Allocation Logic for
>>>>>>>>>>>> Spark 4+
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On the subject of dynamic allocation, is the following message
>>>>>>>>>>>> a cause for concern when running Spark on k8s?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>>>>>> without a shuffle service.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>>>
>>>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>>>
>>>>>>>>>>>> London
>>>>>>>>>>>>
>>>>>>>>>>>> United Kingdom
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> From what I have seen, Spark on a serverless cluster has a hard
>>>>>>>>>>>> time getting the driver going in a timely manner
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>>>>>
>>>>>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>>>>>>>> spark-submit and the docker itself
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I am not sure how relevant this is to this discussion but it
>>>>>>>>>>>> looks like a kind of blocker for now. What config params can help here
>>>>>>>>>>>> and what can be done?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>>>
>>>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>>>
>>>>>>>>>>>> London
>>>>>>>>>>>>
>>>>>>>>>>>> United Kingdom
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Oh great point
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <
>>>>>>>>>>>> holden@pigscanfly.ca> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> So I'm wondering if there is interest in revisiting some of
>>>>>>>>>>>> how Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Some things that I've been thinking about:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know
>>>>>>>>>>>> I need Y where Y might be a bunch of GPU machines)
>>>>>>>>>>>>
>>>>>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over
>>>>>>>>>>>> target no-op)
>>>>>>>>>>>>
>>>>>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>>>>>
>>>>>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can
>>>>>>>>>>>> do here but, one area for example is we setup and tear down an RPC
>>>>>>>>>>>> connection to the driver with a blocking call which does seem to have some
>>>>>>>>>>>> locking inside of the driver at first glance)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Is this an area other folks are thinking about? Should I make
>>>>>>>>>>>> an epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>>>>
>>>>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>>>>
>>>>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Qian Sun
>>>>>
>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Holden Karau <ho...@pigscanfly.ca>.
For now I've filed https://issues.apache.org/jira/browse/SPARK-44951 &
https://issues.apache.org/jira/browse/SPARK-44950
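
To make the "configurable tolerance" idea from the original list concrete, here is a rough
sketch (the helper name and signature are hypothetical, not an existing Spark API): if the
current executor count is within Z% above the target, the allocation manager would no-op
instead of churning pods.

```python
# Hypothetical sketch of a "configurable tolerance" check for dynamic
# allocation. Names are illustrative only, not Spark APIs: the idea is
# that being slightly over target is cheaper to tolerate than to correct.

def should_resize(current: int, target: int, tolerance_pct: float) -> bool:
    """Return True if the allocation manager should act, False to no-op."""
    if current < target:              # under target: always scale up
        return True
    upper = target * (1 + tolerance_pct / 100.0)
    return current > upper            # shrink only beyond the tolerance band

# With a 20% tolerance, 11 executors against a target of 10 is left alone,
# while 13 executors triggers a downscale and 8 triggers an upscale.
print(should_resize(11, 10, 20.0))  # False -> no-op
print(should_resize(13, 10, 20.0))  # True  -> shrink
print(should_resize(8, 10, 20.0))   # True  -> grow
```

The tolerance would presumably be a per-application conf, so jobs that care more about
cost than latency can widen the band.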

On Thu, Aug 24, 2023 at 11:54 AM Holden Karau <ho...@pigscanfly.ca> wrote:

> So we could launch Spark execs at the same time as the driver (provided we
> know *enough* to tell the execs who to talk to); we'd need a bit of work to
> allow the executor to "wait" for the driver to become available (since we'd
> no longer know for sure it was present first). This could probably help a
> lot with "smaller" Spark jobs.
>
> Another possibility could be to (potentially) explore extending this out to
> the idea of "warm" Spark execs, which are not tied to a particular driver
> but start up and wait for a driver to connect to them and "claim" them.
>
>
> On Thu, Aug 24, 2023 at 8:48 AM Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> Thanks both
>>
>> The current model assumes that the driver must be up and running before
>> the executors are started. In other words, it is the driver that is in
>> charge of managing the executors, independent of the scheduler, and it
>> talks directly to the Kube apiserver to spawn the executors.
>>
>> This diagram of mine below may be incorrect, because it assumes that the
>> Kube apiserver asks the Scheduler ->2 to create the driver. Some assume that the
>> kube apiserver directly creates the driver pod -->3. Then the driver pod
>> talks to the Kube apiserver to request creation of other pods through the
>> Scheduler. However, this is time consuming. Can the Scheduler create
>> executor pods at the same time as the driver pod?
>>
>> [image: gke2.png]
>>
>> The idea of lazy executor IDs is potentially helpful. As I understand it,
>> in standalone mode the executor IDs are assigned upfront, before the
>> executors are launched. In k8s, with lazy executor ID allocation, executor
>> IDs are not assigned upfront when the executors are launched. Instead, they
>> are assigned dynamically as tasks are scheduled to run on specific executors.
>> This means that the executor IDs are assigned only when they are actually
>> needed to run tasks, rather than in advance. This adds some form of
>> optimization by reducing the overhead of managing executor IDs for
>> executors that might not end up running tasks concurrently. As correctly
>> pointed out, this can be useful in dealing with dynamic
>> workload patterns, where the number of executors may vary based on demand.
>>
>> There are two scenarios:
>>
>> 1) Conventional k8s cluster.
>>
>>    - You choose the hardware (memory, cores). One node will be used for
>>    the driver pod and others for the executor pods. If you get the hardware
>>    wrong, you will get errors because your cluster is under-specced. Then you
>>    have to create a more powerful cluster. For example, if you have a large
>>    docker file, that may not fit into the driver pod memory.
>>       - Action -> We ought to have a heuristic advisor to help us estimate
>>       the correct spec for our k8s nodes before creating the cluster
>>
>> 2) autopilot cluster or serverless cluster
>>
>>    - You choose the name of the cluster and region and the rest will be
>>    taken care of. It is not that simple! Your starting node for the driver may
>>    be inadequate, so I have seen the driver recreated on the assumption that
>>    the underlying node hardware had to be redone. Although we can scale
>>    horizontally, there is no way we can scale up the driver pod dynamically. I
>>    am excluding gang scheduling here because the focus is on one Spark
>>    application only for this study.
>>
>> I will wait for some comments so we can decide on what additional JIRAs
>> are required, if agreed.
>>
>> Mich
>>
>>
>>
>>
>>
>> On Thu, 24 Aug 2023 at 03:07, Holden Karau <ho...@pigscanfly.ca> wrote:
>>
>>> One option could be to initially launch both the driver and the initial
>>> executors (using lazy executor ID allocation), but it would introduce a
>>> lot of complexity.
>>>
>>> On Wed, Aug 23, 2023 at 6:44 PM Qian Sun <qi...@gmail.com> wrote:
>>>
>>>> Hi Mich
>>>>
>>>> I agree with your opinion that the startup time of the Spark on
>>>> Kubernetes cluster needs to be improved.
>>>>
>>> Regarding fetching the image directly: I have utilized an ImageCache to
>>> store the images on the node, eliminating the time required to pull images
>>> from a remote repository. This does indeed lead to a reduction in
>>> overall time, and the effect becomes more pronounced as the size of the
>>> image increases.
>>>>
>>>> Additionally, I have observed that the driver pod takes a significant
>>>> amount of time from running to attempting to create executor pods, with an
>>>> estimated time expenditure of around 75%. We can also explore optimization
>>>> options in this area.
>>>>
>>>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> In this conversation, one of the issues I brought up was the driver
>>>>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>>>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>>>> single driver pod (the "master" in standalone terms) and a number of
>>>>> executors ("workers"). When executed on k8s, the driver and executors are
>>>>> executed on separate pods
>>>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>>>> the driver pod is launched, then the driver pod itself launches the
>>>>> executor pods. From my observation, in an autoscaling cluster, the driver
>>>>> pod may take up to 40 seconds, followed by the executor pods. This is a
>>>>> considerable time for customers and it is painfully slow. Can we actually
>>>>> move away from the dependency on standalone mode and try to speed up k8s
>>>>> cluster formation?
>>>>>
>>>>> Another naive question: when the docker image is pulled from the
>>>>> container registry to the driver itself, this takes a finite amount of
>>>>> time. The docker image for executors could be different from the driver's
>>>>> docker image. Since spark-submit presents this at the time of submission,
>>>>> can we save time by fetching the docker images straight away?
>>>>>
>>>>> Thanks
>>>>>
>>>>> Mich
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>> Splendid idea. 👍
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>> Solutions Architect/Engineering Lead
>>>>>> London
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca>
>>>>>> wrote:
>>>>>>
>>>>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>>>>> “faster Spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>>>
>>>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>
>>>>>>>> From my own perspective, faster execution time, especially with Spark
>>>>>>>> on tin boxes (Dataproc & EC2) and Spark on k8s, is something that customers
>>>>>>>> often bring up.
>>>>>>>>
>>>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>>>> poor performance of Spark on k8s autopilot, with timelines starting from the
>>>>>>>> driver itself moving from Pending to Running phase (Spark 3.4.1 with
>>>>>>>> Java 11)
>>>>>>>>
>>>>>>>> HTH
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>> London
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> +1 to enhancements in DEA. Long overdue!
>>>>>>>>>
>>>>>>>>> There were a few things that I have been thinking along the same
>>>>>>>>> lines for some time now (a few overlap with @holden's points):
>>>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>>>>> for some units of resources, but by the time the RM provisions them,
>>>>>>>>> the driver has already cancelled the request.
>>>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer
>>>>>>>>> to choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>>>> higher costs for faster execution.
>>>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>>>> within the same spark application.
>>>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>>>
>>>>>>>>> Model-based learning would be awesome.
>>>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>>>
>>>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>>>> my friends in this domain. One lesson we had was, it is hard to have a
>>>>>>>>> generic algorithm that worked for all cases.
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>> kalyan.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Thanks for pointing out this feature to me. I will have a look
>>>>>>>>>> when I get there.
>>>>>>>>>>
>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>> London
>>>>>>>>>> United Kingdom
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Qian Sun
>>>>
>>> --
>>> Twitter: https://twitter.com/holdenkarau
>>> Books (Learning Spark, High Performance Spark, etc.):
>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>
>>
>
> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.):
> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>


-- 
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
YouTube Live Streams: https://www.youtube.com/user/holdenkarau

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Holden Karau <ho...@pigscanfly.ca>.
So we could launch Spark execs at the same time as the driver (provided we
know *enough* to tell the execs who to talk to); we'd need a bit of work to
allow the executor to "wait" for the driver to become available (since we'd
no longer know for sure it was present first). This could probably help a
lot with "smaller" Spark jobs.

Another possibility could be to (potentially) explore extending this out to
the idea of "warm" Spark execs, which are not tied to a particular driver
but start up and wait for a driver to connect to them and "claim" them.
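
The executor-side "wait for the driver" behaviour could look roughly like
this — a pure illustration, where `try_connect` stands in for Spark's real
RPC handshake and the retry/backoff policy is an assumption, not Spark's
actual behaviour:

```python
# Illustrative sketch only: an executor launched before (or alongside) the
# driver polls until the driver is reachable, instead of failing fast.
# `try_connect` is a stand-in for the real RPC registration call.
import itertools
import time

def register_with_driver(try_connect, max_attempts=10, backoff_s=0.01):
    """Retry the driver handshake until it succeeds or attempts run out."""
    for attempt in itertools.count(1):
        if try_connect():
            return attempt                  # registered: report attempts used
        if attempt >= max_attempts:
            raise TimeoutError("driver never became available")
        time.sleep(backoff_s * attempt)     # simple linear backoff

# Simulate a driver that only comes up on the third handshake attempt.
calls = {"n": 0}
def fake_driver():
    calls["n"] += 1
    return calls["n"] >= 3

print(register_with_driver(fake_driver))  # 3
```

The same loop would also cover the "warm" exec case: a pre-started executor
simply keeps waiting (with a much larger attempt budget) until some driver
claims it.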


On Thu, Aug 24, 2023 at 8:48 AM Mich Talebzadeh <mi...@gmail.com>
wrote:

> Thanks both
>
> This current  model assumes that you will need the driver up and running
> before the executors are started. In other words, this is the driver which
> is in charge of managing the executors, independent of the scheduler and
> talks directly with Kube apiserver to spawn the executors.
>
> How to
>
> This diagram of mine below may be incorrect because it assumes that Kube
> apiserver asks the Scheduler ->2 to create the driver? Some assume that the
> kube apiserver directly creates the driver pod -->3. Then the driver pod
> talks to Kube apiserver to request creation of other pods through the
> Scheduler. However, this is time consuming. Can the Scheduler create
> executor pods at the same as the driver pod?
>
> [image: gke2.png]
>
> The idea of lazy executor IDs is potentially helpful. As I understand in
> standalone mode, the executor IDs are assigned upfront before they are
> launched. In k8s, with lazy executor ID allocation, executor IDs are not
> assigned upfront when the executors are launched. Instead, they are
> assigned dynamically as tasks are scheduled to run on specific executors.
> This means that the executor IDs are assigned only when they are actually
> needed to run tasks, rather than in advance. This adds some form of
> optimization by reducing the overhead of managing executor IDs for
> executors that might not end up running tasks concurrently. Can be
> potentially useful as correctly pointed out in dealing with dynamic
> workload patterns, where the number of executors may vary based on demand.
>
> There are two scenarios:
>
> 1) Conventional k8s cluster.
>
>    - You choose the hardware(memory, cores). One node will be used for
>    the driver pod and others for the executor pods. If you get the hardware
>    wrong., you will get errors because your cluster is under specced. Then you
>    have to create a more powerful cluster. For example, if you have a large
>    docker file, that may not fit into the driver pod memory.
>       - Acton ->We ought to have a heuristic advisor to help us estimate
>       the correct spec for our k8s nodes before creating the cluster
>
> 2) autopilot cluster or serverless cluster
>
>    - You choose the name of the cluster and region and the rest will be
>    taken care of. It is not that simple! Your starting node for the driver may
>    be inadequate so I have seen the driver recreated, on the assumption that
>    the underlying node hardware had to be redone. Although we can scale
>    horizontally, there is no way we can scale up the driver pod dynamically. I
>    am excluding gang scheduling here because the focus is on one spark
>    application only for this study.
>
> I wait for some comments so we can decide on what additional jiras
> required if agreed.
>
> Mich
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 24 Aug 2023 at 03:07, Holden Karau <ho...@pigscanfly.ca> wrote:
>
>> One option could be to initially launch both drivers and initial
>> executors (using the lazy executor ID allocation), but it would introduce a
>> lot of complexity.
>>
>> On Wed, Aug 23, 2023 at 6:44 PM Qian Sun <qi...@gmail.com> wrote:
>>
>>> Hi Mich
>>>
>>> I agree with your opinion that the startup time of the Spark on
>>> Kubernetes cluster needs to be improved.
>>>
>>> Regarding the fetching image directly, I have utilized ImageCache to
>>> store the images on the node, eliminating the time required to pull images
>>> from a remote repository, which does indeed lead to a reduction in
>>> overall time, and the effect becomes more pronounced as the size of the
>>> image increases.
>>>
>>> Additionally, I have observed that the driver pod takes a significant
>>> amount of time from running to attempting to create executor pods, with an
>>> estimated time expenditure of around 75%. We can also explore optimization
>>> options in this area.
>>>
>>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> On this conversation, one of the issues I brought up was the driver
>>>> start-up time. This is especially true in k8s. As spark on k8s is modeled
>>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>>> single driver pod (the "master" in standalone terms) and a number of executors
>>>> (“workers”). When executed on k8s, the driver and executors are
>>>> executed on separate pods
>>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>>> the driver pod is launched, then the driver pod itself launches the
>>>> executor pods. From my observation, in an auto scaling cluster, the driver
>>>> pod may take up to 40 seconds followed by executor pods. This is a
>>>> considerable time for customers and it is painfully slow. Can we actually
>>>> move away from dependency on standalone mode and try to speed up k8s
>>>> cluster formation.
>>>>
>>>> Another naive question, when the docker image is pulled from the
>>>> container registry to the driver itself, this takes finite time. The docker
>>>> image for executors could be different from that of the driver
>>>> docker image. Since spark-submit presents this at the time of submission,
>>>> can we save time by fetching the docker images straight away?
>>>>
>>>> Thanks
>>>>
>>>> Mich
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>>>> wrote:
>>>>
>>>>> Splendid idea. 👍
>>>>>
>>>>> Mich Talebzadeh,
>>>>> Solutions Architect/Engineering Lead
>>>>> London
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca>
>>>>> wrote:
>>>>>
>>>>>> The driver itself is probably another topic, perhaps I’ll make a
>>>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>>
>>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>
>>>>>>> From my own perspective faster execution time especially with Spark
>>>>>>> on tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>>>> often bring up.
>>>>>>>
>>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>>> poor performance of Spark on k8s autopilot with timelines starting the
>>>>>>> driver itself and moving from Pending to Running phase (Spark 3.4.1 with
>>>>>>> Java 11)
>>>>>>>
>>>>>>> HTH
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>> London
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>>
>>>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>>>
>>>>>>>> There were a few things that I was thinking along the same lines
>>>>>>>> for some time now (a few overlap with @holden 's points)
>>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>>>> for some units of resources. But when RM provisions them, the driver
>>>>>>>> cancels it.
>>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer
>>>>>>>> to choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>>> higher costs for faster execution.
>>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>>> within the same spark application.
>>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>>
>>>>>>>> Model-based learning would be awesome.
>>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>>
>>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>>> my friends in this domain. One lesson we had was, it is hard to have a
>>>>>>>> generic algorithm that worked for all cases.
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> kalyan.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Thanks for pointing out this feature to me. I will have a look
>>>>>>>>> when I get there.
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>> London
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>>>>> written to a distributed filesystem or persisted in a remote
>>>>>>>>>> shuffle service.
>>>>>>>>>>
>>>>>>>>>> Uniffle is a general purpose remote shuffle service (
>>>>>>>>>> https://github.com/apache/incubator-uniffle).  It can enhance
>>>>>>>>>> the experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>>>> support the `ShuffleDriverComponents`.  you can see [1].
>>>>>>>>>>
>>>>>>>>>> If you are interested in more details about Uniffle, see
>>>>>>>>>> [2]
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>>>
>>>>>>>>>> [2]
>>>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *发件人**: *Mich Talebzadeh <mi...@gmail.com>
>>>>>>>>>> *日期**: *2023年8月8日 星期二 06:53
>>>>>>>>>> *抄送**: *dev <de...@spark.apache.org>
>>>>>>>>>> *主题**: *[Internet]Re: Improving Dynamic Allocation Logic for
>>>>>>>>>> Spark 4+
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>>>> without a shuffle service.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>
>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>
>>>>>>>>>> London
>>>>>>>>>>
>>>>>>>>>> United Kingdom
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>    view my Linkedin profile
>>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all
>>>>>>>>>> responsibility for any loss, damage or destruction of data or any other
>>>>>>>>>> property which may arise from relying on this email's technical content is
>>>>>>>>>> explicitly disclaimed. The author will in no case be liable for any
>>>>>>>>>> monetary damages arising from such loss, damage or destruction.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> From what I have seen, spark on a serverless cluster has a hard time
>>>>>>>>>> getting the driver going in a timely manner
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>>>
>>>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>>>>>> spark-submit and the docker itself
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I am not sure how relevant this is to this discussion but it
>>>>>>>>>> looks like a kind of blocker for now. What config params can help here
>>>>>>>>>> and what can be done?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>
>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>
>>>>>>>>>> London
>>>>>>>>>>
>>>>>>>>>> United Kingdom
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>    view my Linkedin profile
>>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all
>>>>>>>>>> responsibility for any loss, damage or destruction of data or any other
>>>>>>>>>> property which may arise from relying on this email's technical content is
>>>>>>>>>> explicitly disclaimed. The author will in no case be liable for any
>>>>>>>>>> monetary damages arising from such loss, damage or destruction.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Oh great point
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> So I'm wondering if there is interest in revisiting some of how
>>>>>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Some things that I've been thinking about:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>>>
>>>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>>>> no-op)
>>>>>>>>>>
>>>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>>>
>>>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>>>> here but, one area for example is we setup and tear down an RPC connection
>>>>>>>>>> to the driver with a blocking call which does seem to have some locking
>>>>>>>>>> inside of the driver at first glance)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>>
>>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>>
>>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>>
>>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>>
>>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>>
>>>>>>>>>> --
>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>
>>>>>
>>>
>>> --
>>> Regards,
>>> Qian Sun
>>>
>> --
>> Twitter: https://twitter.com/holdenkarau
>> Books (Learning Spark, High Performance Spark, etc.):
>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>
>

-- 
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
YouTube Live Streams: https://www.youtube.com/user/holdenkarau

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks both

The current model assumes that the driver must be up and running before
the executors are started. In other words, it is the driver that is in
charge of managing the executors, independently of the scheduler, and it
talks directly to the Kube apiserver to spawn them.


My diagram below may be incorrect, because it assumes that the Kube
apiserver asks the Scheduler (step 2) to create the driver. Some assume
that the kube apiserver creates the driver pod directly (step 3). The
driver pod then talks to the Kube apiserver to request creation of the
executor pods through the Scheduler. However, this is time consuming. Can
the Scheduler create executor pods at the same time as the driver pod?

[image: gke2.png]
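To make the cost of the sequential launch concrete, here is a toy timing model (pure Python, not Spark code). The durations are illustrative assumptions, with the 40-second driver figure taken from the observation earlier in this thread:

```python
# Toy model comparing today's sequential launch (driver pod must be
# Running before it requests executor pods) with a hypothetical
# scheduler that provisions executor pods alongside the driver.
# All durations are assumptions, not measurements.

DRIVER_STARTUP_S = 40    # observed upper bound in an autoscaling cluster
EXECUTOR_STARTUP_S = 15  # assumed pod startup; executor pods start in parallel

def sequential_startup_s() -> int:
    # Executor pods can only be requested once the driver is up,
    # so the two phases add.
    return DRIVER_STARTUP_S + EXECUTOR_STARTUP_S

def concurrent_startup_s() -> int:
    # If executor pods were created together with the driver pod,
    # total startup would be bounded by the slower of the two.
    return max(DRIVER_STARTUP_S, EXECUTOR_STARTUP_S)

print(sequential_startup_s())  # 55
print(concurrent_startup_s())  # 40
```

Under these assumed numbers, concurrent provisioning hides the executor startup entirely behind the driver startup; the real saving obviously depends on the actual pod timings.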

The idea of lazy executor IDs is potentially helpful. As I understand it,
in standalone mode the executor IDs are assigned upfront, before the
executors are launched. With lazy executor ID allocation on k8s, IDs
would instead be assigned dynamically, only when an executor is actually
needed to run tasks. This reduces the overhead of managing IDs for
executors that never end up running tasks concurrently, and, as correctly
pointed out, could be useful for dynamic workload patterns where the
number of executors varies with demand.
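A minimal sketch of what lazy ID assignment could look like. This is an assumption about the mechanism, not Spark's actual implementation; `LazyExecutorRegistry` and the pod names are hypothetical:

```python
# Sketch: executor IDs drawn from a counter only when an executor
# first registers to run work, rather than reserved when the pod is
# requested. Later lookups for the same pod are stable.
import itertools

class LazyExecutorRegistry:
    def __init__(self):
        self._ids = itertools.count(1)
        self._assigned = {}  # pod name -> executor ID

    def executor_id(self, pod_name: str) -> int:
        # Assign an ID on first use only.
        if pod_name not in self._assigned:
            self._assigned[pod_name] = next(self._ids)
        return self._assigned[pod_name]

reg = LazyExecutorRegistry()
print(reg.executor_id("exec-pod-b"))  # 1 (first pod to register)
print(reg.executor_id("exec-pod-a"))  # 2
print(reg.executor_id("exec-pod-b"))  # 1 (stable on repeat lookup)
```

The point of the sketch is that a pod requested but never used would simply never consume an ID.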

There are two scenarios:

1) Conventional k8s cluster.

   - You choose the hardware (memory, cores). One node will be used for the
   driver pod and others for the executor pods. If you get the hardware
   wrong, you will get errors because your cluster is under-specced, and you
   have to create a more powerful cluster. For example, a large docker
   image may not fit into the driver pod memory.
      - Action -> We ought to have a heuristic advisor to help us estimate
      the correct spec for our k8s nodes before creating the cluster

2) autopilot cluster or serverless cluster

   - You choose the name of the cluster and region and the rest will be
   taken care of. It is not that simple! Your starting node for the driver may
   be inadequate so I have seen the driver recreated, on the assumption that
   the underlying node hardware had to be redone. Although we can scale
   horizontally, there is no way we can scale up the driver pod dynamically. I
   am excluding gang scheduling here because the focus is on one spark
   application only for this study.
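Along the lines of the heuristic-advisor action item above, here is a deliberately crude sketch of what such an estimate could look like. The function name and factors are hypothetical illustrations, not tested recommendations; the 10% headroom mirrors the default of Spark's `spark.kubernetes.memoryOverheadFactor`:

```python
# Hypothetical heuristic: estimate minimum driver-node memory (MiB)
# from the image size and the requested driver memory, plus headroom.

def advise_driver_node_memory_mi(image_size_mi: int,
                                 driver_memory_mi: int,
                                 overhead_factor: float = 0.10) -> int:
    # Headroom for JVM/non-heap overhead (assumed 10%), plus room on
    # the node to hold the pulled image itself.
    overhead = int(driver_memory_mi * overhead_factor)
    return image_size_mi + driver_memory_mi + overhead

# e.g. a 900 MiB image with a 1433 MiB driver (the autopilot default
# request seen earlier in this thread):
print(advise_driver_node_memory_mi(900, 1433))  # 2476
```

A real advisor would of course also weigh cores, past runs of the same job, and executor-node sizing, but even a rule of thumb like this would catch the "large docker image does not fit" failure before cluster creation.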

I await your comments so we can decide on what additional JIRAs are
required, if agreed.

Mich

   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 24 Aug 2023 at 03:07, Holden Karau <ho...@pigscanfly.ca> wrote:

> One option could be to initially launch both drivers and initial executors
> (using the lazy executor ID allocation), but it would introduce a lot of
> complexity.
>
> On Wed, Aug 23, 2023 at 6:44 PM Qian Sun <qi...@gmail.com> wrote:
>
>> Hi Mich
>>
>> I agree with your opinion that the startup time of the Spark on
>> Kubernetes cluster needs to be improved.
>>
>> Regarding the fetching image directly, I have utilized ImageCache to
>> store the images on the node, eliminating the time required to pull images
>> from a remote repository, which does indeed lead to a reduction in
>> overall time, and the effect becomes more pronounced as the size of the
>> image increases.
>>
>> Additionally, I have observed that the driver pod takes a significant
>> amount of time from running to attempting to create executor pods, with an
>> estimated time expenditure of around 75%. We can also explore optimization
>> options in this area.
>>
>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> On this conversation, one of the issues I brought up was the driver
>>> start-up time. This is especially true in k8s. As spark on k8s is modeled
>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>> single driver pod (the "master" in standalone terms) and a number of executors
>>> (“workers”). When executed on k8s, the driver and executors are
>>> executed on separate pods
>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>> the driver pod is launched, then the driver pod itself launches the
>>> executor pods. From my observation, in an auto scaling cluster, the driver
>>> pod may take up to 40 seconds followed by executor pods. This is a
>>> considerable time for customers and it is painfully slow. Can we actually
>>> move away from dependency on standalone mode and try to speed up k8s
>>> cluster formation.
>>>
>>> Another naive question, when the docker image is pulled from the
>>> container registry to the driver itself, this takes finite time. The docker
>>> image for executors could be different from that of the driver
>>> docker image. Since spark-submit presents this at the time of submission,
>>> can we save time by fetching the docker images straight away?
>>>
>>> Thanks
>>>
>>> Mich
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>>> wrote:
>>>
>>>> Splendid idea. 👍
>>>>
>>>> Mich Talebzadeh,
>>>> Solutions Architect/Engineering Lead
>>>> London
>>>> United Kingdom
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>>
>>>>> The driver itself is probably another topic, perhaps I’ll make a
>>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>
>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>> From my own perspective faster execution time especially with Spark
>>>>>> on tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>>> often bring up.
>>>>>>
>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>> poor performance of Spark on k8s autopilot with timelines starting the
>>>>>> driver itself and moving from Pending to Running phase (Spark 3.4.1 with
>>>>>> Java 11)
>>>>>>
>>>>>> HTH
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>> Solutions Architect/Engineering Lead
>>>>>> London
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>
>>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>>
>>>>>>> There were a few things that I was thinking along the same lines for
>>>>>>> some time now (a few overlap with @holden 's points)
>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>>> for some units of resources. But when RM provisions them, the driver
>>>>>>> cancels it.
>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>> higher costs for faster execution.
>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>> within the same spark application.
>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>
>>>>>>> Model-based learning would be awesome.
>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>
>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>> my friends in this domain. One lesson we had was, it is hard to have a
>>>>>>> generic algorithm that worked for all cases.
>>>>>>>
>>>>>>> Regards
>>>>>>> kalyan.
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>
>>>>>>>> Thanks for pointing out this feature to me. I will have a look when
>>>>>>>> I get there.
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>> London
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>>>>
>>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>>>>>> service.
>>>>>>>>>
>>>>>>>>> Uniffle is a general purpose remote shuffle service (
>>>>>>>>> https://github.com/apache/incubator-uniffle).  It can enhance the
>>>>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>>> support the `ShuffleDriverComponents`.  you can see [1].
>>>>>>>>>
>>>>>>>>> If you are interested in more details about Uniffle, see
>>>>>>>>> [2]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>>
>>>>>>>>> [2]
>>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *发件人**: *Mich Talebzadeh <mi...@gmail.com>
>>>>>>>>> *日期**: *2023年8月8日 星期二 06:53
>>>>>>>>> *抄送**: *dev <de...@spark.apache.org>
>>>>>>>>> *主题**: *[Internet]Re: Improving Dynamic Allocation Logic for
>>>>>>>>> Spark 4+
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>>> without a shuffle service.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>
>>>>>>>>> London
>>>>>>>>>
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> From what I have seen, spark on a serverless cluster has a hard time
>>>>>>>>> getting the driver going in a timely manner
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>>
>>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>>>>> spark-submit and the docker itself
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>>>>> can be done?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>
>>>>>>>>> London
>>>>>>>>>
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Oh great point
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> So I'm wondering if there is interest in revisiting some of how
>>>>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Some things that I've been thinking about:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>>
>>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>>> no-op)
>>>>>>>>>
>>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>>
>>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>>> here but, one area for example is we setup and tear down an RPC connection
>>>>>>>>> to the driver with a blocking call which does seem to have some locking
>>>>>>>>> inside of the driver at first glance)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>
>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>
>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>
>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>
>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>
>>>>>>>>> --
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>
>>>>
>>
>> --
>> Regards,
>> Qian Sun
>>
> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.):
> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Holden Karau <ho...@pigscanfly.ca>.
One option could be to launch both the driver and the initial executors up
front (using lazy executor ID allocation), but it would introduce a lot of
complexity.
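
For reference, the closest approximation available today is to pre-request
executors up front so their pods are created as soon as the driver is running.
A hedged sketch (the API server URL, image name, and executor counts are
placeholders; the config names are the standard dynamic-allocation and
Kubernetes allocation knobs, not the lazy-ID approach itself, which would need
scheduler changes):

```shell
# Sketch: ask for a block of executors at startup so the driver requests
# their pods immediately. Placeholders: <api-server>, <spark-image>.
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.initialExecutors=8 \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  --conf spark.kubernetes.allocation.batch.size=8 \
  --conf spark.kubernetes.allocation.batch.delay=1s \
  local:///opt/spark/examples/src/main/python/pi.py
```

Enlarging spark.kubernetes.allocation.batch.size reduces the number of
allocation rounds needed to reach the initial target, which is often the
cheapest win before any scheduler changes.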

On Wed, Aug 23, 2023 at 6:44 PM Qian Sun <qi...@gmail.com> wrote:

> Hi Mich
>
> I agree with your opinion that the startup time of the Spark on Kubernetes
> cluster needs to be improved.
>
> Regarding the fetching image directly, I have utilized ImageCache to store
> the images on the node, eliminating the time required to pull images from a
> remote repository, which does indeed lead to a reduction in overall time,
> and the effect becomes more pronounced as the size of the image increases.
>
>
> Additionally, I have observed that the driver pod takes a significant
> amount of time from running to attempting to create executor pods, with an
> estimated time expenditure of around 75%. We can also explore optimization
> options in this area.
>
> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
> mich.talebzadeh@gmail.com> wrote:
>
>> Hi all,
>>
>> On this conversation, one of the issues I brought up was the driver
>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>> on the Spark standalone scheduler, Spark on k8s consists of a single
>> driver pod (as the “master” in standalone) and a number of executors (“workers”). When executed
>> on k8s, the driver and executors are executed on separate pods
>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>> the driver pod is launched, then the driver pod itself launches the
>> executor pods. From my observation, in an auto scaling cluster, the driver
>> pod may take up to 40 seconds followed by executor pods. This is a
>> considerable time for customers and it is painfully slow. Can we actually
>> move away from dependency on standalone mode and try to speed up k8s
>> cluster formation?
>>
>> Another naive question, when the docker image is pulled from the
>> container registry to the driver itself, this takes finite time. The docker
>> image for executors could be different from that of the driver
>> docker image. Since spark-submit presents this at the time of submission,
>> can we save time by fetching the docker images straight away?
>>
>> Thanks
>>
>> Mich
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>> Splendid idea. 👍
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>
>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>
>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com> wrote:
>>>>
>>>>> From my own perspective faster execution time especially with Spark on
>>>>> tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>> often bring up.
>>>>>
>>>>> Poor time to onboard with autoscaling seems to be particularly singled
>>>>> out for heavy ETL jobs that use Spark. I am disappointed to see the poor
>>>>> performance of Spark on k8s autopilot with timelines starting the driver
>>>>> itself and moving from Pending to Running phase (Spark 3.4.1 with Java 11)
>>>>>
>>>>> HTH
>>>>>
>>>>> Mich Talebzadeh,
>>>>> Solutions Architect/Engineering Lead
>>>>> London
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>
>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>
>>>>>> There were a few things that I was thinking along the same lines for
>>>>>> some time now (a few overlap with @holden's points)
>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>> for some units of resources. But when RM provisions them, the driver
>>>>>> cancels it.
>>>>>> 2. How to make the resource available when it is needed.
>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>> higher costs for faster execution.
>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>> 5. Allow different DEA algo to be chosen for different queries within
>>>>>> the same spark application.
>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>
>>>>>> Model-based learning would be awesome.
>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>
>>>>>> I am aware of a few experiments carried out in this area by
>>>>>> my friends in this domain. One lesson we had was, it is hard to have a
>>>>>> generic algorithm that worked for all cases.
>>>>>>
>>>>>> Regards
>>>>>> kalyan.
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>
>>>>>>> Thanks for pointing out this feature to me. I will have a look when
>>>>>>> I get there.
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>> London
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>>>
>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>>>>> service.
>>>>>>>>
>>>>>>>> Uniffle is a general purpose remote shuffle service (
>>>>>>>> https://github.com/apache/incubator-uniffle).  It can enhance the
>>>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>> support the `ShuffleDriverComponents`.  you can see [1].
>>>>>>>>
>>>>>>>> If you are interested in more details of Uniffle, you can see [2].
>>>>>>>>
>>>>>>>>
>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>
>>>>>>>> [2]
>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *发件人**: *Mich Talebzadeh <mi...@gmail.com>
>>>>>>>> *日期**: *2023年8月8日 星期二 06:53
>>>>>>>> *抄送**: *dev <de...@spark.apache.org>
>>>>>>>> *主题**: *[Internet]Re: Improving Dynamic Allocation Logic for Spark
>>>>>>>> 4+
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>> without a shuffle service.
>>>>>>>>
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>>
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>
>>>>>>>> London
>>>>>>>>
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>>>> getting the driver going in a timely manner
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>
>>>>>>>>
>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>
>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>>>> spark-submit and the docker itself
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>>>> can be done?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>>
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>
>>>>>>>> London
>>>>>>>>
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Oh great point
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> So I was wondering if there is interest in revisiting some of how
>>>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Some things that I've been thinking about:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>
>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>> no-op)
>>>>>>>>
>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>
>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>> here, but one area, for example, is that we set up and tear down an RPC
>>>>>>>> connection to the driver with a blocking call, which does seem to involve
>>>>>>>> some locking inside the driver at first glance)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>
>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>
>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>
>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>
>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>
>>>>>>>> --
>>>> Twitter: https://twitter.com/holdenkarau
>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>
>>>
>
> --
> Regards,
> Qian Sun
>
-- 
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
YouTube Live Streams: https://www.youtube.com/user/holdenkarau

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks Qian for your feedback.

I will have a look

Regards,

Mich


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Mon, 28 Aug 2023 at 02:32, Qian Sun <qi...@gmail.com> wrote:

> Hi Mich,
>
> ImageCache is an Alibaba Cloud ECI feature [1]. An image cache is a
> cluster-level resource that you can use to accelerate the creation of pods
> in different namespaces.
>
> If the Spark image needs to be updated, an image cache is created in the
> cluster, and a pod annotation is specified so that pods use it [2].
>
>
> ref:
> 1.
> https://www.alibabacloud.com/help/en/elastic-container-instance/latest/overview-of-the-image-cache-feature?spm=a2c63.p38356.0.0.19977f3e9Xpq4E#topic-2131957
> 2.
> https://www.alibabacloud.com/help/en/ack/serverless-kubernetes/user-guide/use-image-caches-to-accelerate-the-creation-of-pods#section-3e8-8n8-hdh
>
> On Fri, Aug 25, 2023 at 10:08 PM Mich Talebzadeh <
> mich.talebzadeh@gmail.com> wrote:
>
>> Hi Qian,
>>
>> How in practice have you implemented image caching for the driver and
>> executor pods respectively?
>>
>> Thanks
>>
>> On Thu, 24 Aug 2023 at 02:44, Qian Sun <qi...@gmail.com> wrote:
>>
>>> Hi Mich
>>>
>>> I agree with your opinion that the startup time of the Spark on
>>> Kubernetes cluster needs to be improved.
>>>
>>> Regarding the fetching image directly, I have utilized ImageCache to
>>> store the images on the node, eliminating the time required to pull images
>>> from a remote repository, which does indeed lead to a reduction in
>>> overall time, and the effect becomes more pronounced as the size of the
>>> image increases.
>>>
>>> Additionally, I have observed that the driver pod takes a significant
>>> amount of time from running to attempting to create executor pods, with an
>>> estimated time expenditure of around 75%. We can also explore optimization
>>> options in this area.
>>>
>>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> On this conversation, one of the issues I brought up was the driver
>>>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>>> single driver pod (as the “master” in standalone) and a number of executors
>>>> (“workers”). When executed on k8s, the driver and executors are
>>>> executed on separate pods
>>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>>> the driver pod is launched, then the driver pod itself launches the
>>>> executor pods. From my observation, in an auto scaling cluster, the driver
>>>> pod may take up to 40 seconds followed by executor pods. This is a
>>>> considerable time for customers and it is painfully slow. Can we actually
>>>> move away from dependency on standalone mode and try to speed up k8s
>>>> cluster formation?
>>>>
>>>> Another naive question, when the docker image is pulled from the
>>>> container registry to the driver itself, this takes finite time. The docker
>>>> image for executors could be different from that of the driver
>>>> docker image. Since spark-submit presents this at the time of submission,
>>>> can we save time by fetching the docker images straight away?
>>>>
>>>> Thanks
>>>>
>>>> Mich
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>>>> wrote:
>>>>
>>>>> Splendid idea. 👍
>>>>>
>>>>> Mich Talebzadeh,
>>>>> Solutions Architect/Engineering Lead
>>>>> London
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca>
>>>>> wrote:
>>>>>
>>>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>>
>>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>
>>>>>>> From my own perspective faster execution time especially with Spark
>>>>>>> on tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>>>> often bring up.
>>>>>>>
>>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>>> poor performance of Spark on k8s autopilot with timelines starting the
>>>>>>> driver itself and moving from Pending to Running phase (Spark 3.4.1 with
>>>>>>> Java 11)
>>>>>>>
>>>>>>> HTH
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>> London
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>>
>>>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>>>
>>>>>>>> There were a few things that I was thinking along the same lines
>>>>>>>> for some time now (a few overlap with @holden's points)
>>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>>>> for some units of resources. But when RM provisions them, the driver
>>>>>>>> cancels it.
>>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer
>>>>>>>> to choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>>> higher costs for faster execution.
>>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>>> within the same spark application.
>>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>>
>>>>>>>> Model-based learning would be awesome.
>>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>>
>>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>>> my friends in this domain. One lesson we had was, it is hard to have a
>>>>>>>> generic algorithm that worked for all cases.
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> kalyan.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Thanks for pointing out this feature to me. I will have a look
>>>>>>>>> when I get there.
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>> London
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>>>>> written to a distributed filesystem or persisted in a remote
>>>>>>>>>> shuffle service.
>>>>>>>>>>
>>>>>>>>>> Uniffle is a general purpose remote shuffle service (
>>>>>>>>>> https://github.com/apache/incubator-uniffle).  It can enhance
>>>>>>>>>> the experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>>>> support the `ShuffleDriverComponents`.  you can see [1].
>>>>>>>>>>
>>>>>>>>>> If you are interested in more details of Uniffle, you can see
>>>>>>>>>> [2].
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>>>
>>>>>>>>>> [2]
>>>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *发件人**: *Mich Talebzadeh <mi...@gmail.com>
>>>>>>>>>> *日期**: *2023年8月8日 星期二 06:53
>>>>>>>>>> *抄送**: *dev <de...@spark.apache.org>
>>>>>>>>>> *主题**: *[Internet]Re: Improving Dynamic Allocation Logic for
>>>>>>>>>> Spark 4+
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>>>> without a shuffle service.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>
>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>
>>>>>>>>>> London
>>>>>>>>>>
>>>>>>>>>> United Kingdom
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>    view my Linkedin profile
>>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all
>>>>>>>>>> responsibility for any loss, damage or destruction of data or any other
>>>>>>>>>> property which may arise from relying on this email's technical content is
>>>>>>>>>> explicitly disclaimed. The author will in no case be liable for any
>>>>>>>>>> monetary damages arising from such loss, damage or destruction.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>>>>>> getting the driver going in a timely manner
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>>>
>>>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>>>>>> spark-submit and the docker itself
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I am not sure how relevant this is to this discussion but it
>>>>>>>>>> looks like a kind of blocker for now. What config params can help here
>>>>>>>>>> and what can be done?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>>
>>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>>
>>>>>>>>>> London
>>>>>>>>>>
>>>>>>>>>> United Kingdom
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>    view my Linkedin profile
>>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all
>>>>>>>>>> responsibility for any loss, damage or destruction of data or any other
>>>>>>>>>> property which may arise from relying on this email's technical content is
>>>>>>>>>> explicitly disclaimed. The author will in no case be liable for any
>>>>>>>>>> monetary damages arising from such loss, damage or destruction.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Oh great point
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> So I was wondering if there is interest in revisiting some of how
>>>>>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Some things that I've been thinking about:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>>>
>>>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>>>> no-op)
>>>>>>>>>>
>>>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>>>
>>>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>>>> here, but one area, for example, is that we set up and tear down an RPC
>>>>>>>>>> connection to the driver with a blocking call, which does seem to involve
>>>>>>>>>> some locking inside the driver at first glance)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>>
>>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>>
>>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>>
>>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>>
>>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>>
>>>>>>>>>> --
>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>
>>>>>
>>>
>>> --
>>> Regards,
>>> Qian Sun
>>>
>> --
>> Mich Talebzadeh,
>> Distinguished Technologist, Solutions Architect & Engineer
>> London
>> United Kingdom
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>
>
> --
> Regards,
> Qian Sun
>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Qian Sun <qi...@gmail.com>.
Hi Mich,

ImageCache is an Alibaba Cloud ECI feature [1]. An image cache is a
cluster-level resource that you can use to accelerate the creation of pods
in different namespaces.

If the Spark image needs to be updated, an image cache is created in the
cluster, and a pod annotation is specified so that pods use the image cache [2].


ref:
1.
https://www.alibabacloud.com/help/en/elastic-container-instance/latest/overview-of-the-image-cache-feature?spm=a2c63.p38356.0.0.19977f3e9Xpq4E#topic-2131957
2.
https://www.alibabacloud.com/help/en/ack/serverless-kubernetes/user-guide/use-image-caches-to-accelerate-the-creation-of-pods#section-3e8-8n8-hdh

On Fri, Aug 25, 2023 at 10:08 PM Mich Talebzadeh <mi...@gmail.com>
wrote:

> Hi Qian,
>
> How in practice have you implemented image caching for the driver and
> executor pods respectively?
>
> Thanks
>
> On Thu, 24 Aug 2023 at 02:44, Qian Sun <qi...@gmail.com> wrote:
>
>> Hi Mich
>>
>> I agree with your opinion that the startup time of the Spark on
>> Kubernetes cluster needs to be improved.
>>
>> Regarding fetching the image directly, I have utilized ImageCache to
>> store the images on the node, eliminating the time required to pull images
>> from a remote repository, which does indeed lead to a reduction in
>> overall time, and the effect becomes more pronounced as the size of the
>> image increases.
>>
>> Additionally, I have observed that the driver pod takes a significant
>> amount of time from entering the Running phase to attempting to create
>> executor pods, with an
>> estimated time expenditure of around 75%. We can also explore optimization
>> options in this area.
>>
>> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> On this conversation, one of the issues I brought up was the driver
>>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>>> on the Spark standalone scheduler, Spark on k8s consists of a
>>> single-driver pod (as “master” on standalone) and a number of executors
>>> (“workers”). When executed on k8s, the driver and executors are
>>> executed on separate pods
>>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>>> the driver pod is launched, then the driver pod itself launches the
>>> executor pods. From my observation, in an auto scaling cluster, the driver
>>> pod may take up to 40 seconds, followed by the executor pods. This is a
>>> considerable time for customers and it is painfully slow. Can we actually
>>> move away from the dependency on standalone mode and try to speed up k8s
>>> cluster formation?
>>>
>>> Another naive question, when the docker image is pulled from the
>>> container registry to the driver itself, this takes finite time. The docker
>>> image for executors could be different from that of the driver
>>> docker image. Since spark-submit presents this at the time of submission,
>>> can we save time by fetching the docker images straight away?
>>>
>>> Thanks
>>>
>>> Mich
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>>> wrote:
>>>
>>>> Splendid idea. 👍
>>>>
>>>> Mich Talebzadeh,
>>>> Solutions Architect/Engineering Lead
>>>> London
>>>> United Kingdom
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>>
>>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>>
>>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>> From my own perspective faster execution time especially with Spark
>>>>>> on tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>>> often bring up.
>>>>>>
>>>>>> Poor time to onboard with autoscaling seems to be particularly
>>>>>> singled out for heavy ETL jobs that use Spark. I am disappointed to see the
>>>>>> poor performance of Spark on k8s autopilot with timelines starting the
>>>>>> driver itself and moving from Pending to Running phase (Spark 3.4.1 with
>>>>>> Java 11)
>>>>>>
>>>>>> HTH
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>> Solutions Architect/Engineering Lead
>>>>>> London
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>>
>>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>>
>>>>>>> There were a few things that I was thinking along the same lines for
>>>>>>> some time now (a few overlap with @holden's points):
>>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>>> for some units of resources. But when RM provisions them, the driver
>>>>>>> cancels it.
>>>>>>> 2. How to make the resource available when it is needed.
>>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>>> higher costs for faster execution.
>>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>>> 5. Allow different DEA algo to be chosen for different queries
>>>>>>> within the same spark application.
>>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>>
>>>>>>> Model-based learning would be awesome.
>>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>>
>>>>>>> I am aware of a few experiments carried out in this area by
>>>>>>> my friends in this domain. One lesson we learned was that it is hard to
>>>>>>> have a generic algorithm that works for all cases.
>>>>>>>
>>>>>>> Regards
>>>>>>> kalyan.
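[Point 3 in the list above (cost vs. run time) could in principle be expressed as a weighted objective. A toy sketch follows; every name and number is made up, and the scaling model (linear speedup plus a fixed per-executor startup overhead) is deliberately naive.]

```python
# Toy model only: pick an executor count that minimises a weighted blend
# of estimated cost and estimated wall-clock time. With cost_weight = 1.0
# the cheapest plan wins; with 0.0 the fastest plan wins.

def pick_executors(task_count, task_secs, startup_secs,
                   cost_per_exec_sec, cost_weight, candidates):
    """cost_weight in [0, 1]: 1.0 favours cheapest, 0.0 favours fastest."""
    def score(n):
        runtime = startup_secs + (task_count / n) * task_secs  # est. seconds
        cost = runtime * n * cost_per_exec_sec                 # est. cost units
        return cost_weight * cost + (1.0 - cost_weight) * runtime
    return min(candidates, key=score)

# Same job profile, two developer preferences:
cheapest = pick_executors(1000, 1.0, 30.0, 1.0, 1.0, range(1, 101))
fastest = pick_executors(1000, 1.0, 30.0, 1.0, 0.0, range(1, 101))
```

Under this toy model the cost-sensitive setting picks a tiny cluster and the latency-sensitive setting picks the largest allowed one, which is the trade-off point 3 is after.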
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>
>>>>>>>> Thanks for pointing out this feature to me. I will have a look when
>>>>>>>> I get there.
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>> London
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>>>>
>>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in
>>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>>>>>> service.
>>>>>>>>>
>>>>>>>>> Uniffle is a general-purpose remote shuffle service (
>>>>>>>>> https://github.com/apache/incubator-uniffle). It can enhance the
>>>>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>>> support the `ShuffleDriverComponents`; you can see [1].
>>>>>>>>>
>>>>>>>>> If you are interested in more details of Uniffle, see [2].
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>>
>>>>>>>>> [2]
>>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *发件人**: *Mich Talebzadeh <mi...@gmail.com>
>>>>>>>>> *日期**: *2023年8月8日 星期二 06:53
>>>>>>>>> *抄送**: *dev <de...@spark.apache.org>
>>>>>>>>> *主题**: *[Internet]Re: Improving Dynamic Allocation Logic for
>>>>>>>>> Spark 4+
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>>> without a shuffle service.
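[For context, and as far as I can tell: on K8s there is no external shuffle service, so dynamic allocation relies on shuffle tracking (`spark.dynamicAllocation.shuffleTracking.enabled`, available since Spark 3.0), and the message above is informational rather than an error. The relevant configuration keys, shown here as a plain dict of the string values one would pass via `--conf` or `SparkConf` (the min/max values are placeholders):]

```python
# Real spark.dynamicAllocation.* keys; values are illustrative.
# shuffleTracking lets the allocation manager release executors safely
# on K8s without an external shuffle service.
dynamic_allocation_conf = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.shuffleTracking.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "1",   # placeholder value
    "spark.dynamicAllocation.maxExecutors": "10",  # placeholder value
}
```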
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>
>>>>>>>>> London
>>>>>>>>>
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>>>>> getting the driver going in a timely manner.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>>
>>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> This is on Spark 3.4.1 with Java 11, on both the host running
>>>>>>>>> spark-submit and in the Docker image itself.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>>>>> can be done?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mich Talebzadeh,
>>>>>>>>>
>>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>>
>>>>>>>>> London
>>>>>>>>>
>>>>>>>>> United Kingdom
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>    view my Linkedin profile
>>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Oh great point
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> So I am wondering if there is interest in revisiting some of how
>>>>>>>>> Spark does its dynamic allocation for Spark 4+?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Some things that I've been thinking about:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>>
>>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>>> no-op)
>>>>>>>>>
>>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>>
>>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>>> here but, one area for example is we setup and tear down an RPC connection
>>>>>>>>> to the driver with a blocking call which does seem to have some locking
>>>>>>>>> inside of the driver at first glance)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>>
>>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>>
>>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>>
>>>>>>>>> --
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>
>>>>
>>
>> --
>> Regards,
>> Qian Sun
>>
> --
> Mich Talebzadeh,
> Distinguished Technologist, Solutions Architect & Engineer
> London
> United Kingdom
>
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>


-- 
Regards,
Qian Sun

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi Qian,

How in practice have you implemented image caching for the driver and
executor pods respectively?

Thanks

On Thu, 24 Aug 2023 at 02:44, Qian Sun <qi...@gmail.com> wrote:

> Hi Mich
>
> I agree with your opinion that the startup time of the Spark on Kubernetes
> cluster needs to be improved.
>
> Regarding fetching the image directly, I have utilized ImageCache to store
> the images on the node, eliminating the time required to pull images from a
> remote repository, which does indeed lead to a reduction in overall time,
> and the effect becomes more pronounced as the size of the image increases.
>
>
> Additionally, I have observed that the driver pod takes a significant
> amount of time from entering the Running phase to attempting to create
> executor pods, with an
> estimated time expenditure of around 75%. We can also explore optimization
> options in this area.
>
> On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <
> mich.talebzadeh@gmail.com> wrote:
>
>> Hi all,
>>
>> On this conversation, one of the issues I brought up was the driver
>> start-up time. This is especially true in k8s. As Spark on k8s is modeled
>> on the Spark standalone scheduler, Spark on k8s consists of a single-driver
>> pod (as “master” on standalone) and a number of executors (“workers”). When executed
>> on k8s, the driver and executors are executed on separate pods
>> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
>> the driver pod is launched, then the driver pod itself launches the
>> executor pods. From my observation, in an auto scaling cluster, the driver
>> pod may take up to 40 seconds, followed by the executor pods. This is a
>> considerable time for customers and it is painfully slow. Can we actually
>> move away from the dependency on standalone mode and try to speed up k8s
>> cluster formation?
>>
>> Another naive question, when the docker image is pulled from the
>> container registry to the driver itself, this takes finite time. The docker
>> image for executors could be different from that of the driver
>> docker image. Since spark-submit presents this at the time of submission,
>> can we save time by fetching the docker images straight away?
>>
>> Thanks
>>
>> Mich
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>> Splendid idea. 👍
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>
>>>> The driver itself is probably another topic; perhaps I’ll make a
>>>> “faster spark start time” JIRA and a DA JIRA and we can explore both.
>>>>
>>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com> wrote:
>>>>
>>>>> From my own perspective faster execution time especially with Spark on
>>>>> tin boxes (Dataproc & EC2) and Spark on k8s is something that customers
>>>>> often bring up.
>>>>>
>>>>> Poor time to onboard with autoscaling seems to be particularly singled
>>>>> out for heavy ETL jobs that use Spark. I am disappointed to see the poor
>>>>> performance of Spark on k8s autopilot with timelines starting the driver
>>>>> itself and moving from Pending to Running phase (Spark 3.4.1 with Java 11)
>>>>>
>>>>> HTH
>>>>>
>>>>> Mich Talebzadeh,
>>>>> Solutions Architect/Engineering Lead
>>>>> London
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>>
>>>>>> +1 to enhancements in DEA. Long time due!
>>>>>>
>>>>>> There were a few things that I was thinking along the same lines for
>>>>>> some time now (a few overlap with @holden's points):
>>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks
>>>>>> for some units of resources. But when RM provisions them, the driver
>>>>>> cancels it.
>>>>>> 2. How to make the resource available when it is needed.
>>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>>>> higher costs for faster execution.
>>>>>> 4. Stitch resource profile choices into query execution.
>>>>>> 5. Allow different DEA algo to be chosen for different queries within
>>>>>> the same spark application.
>>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>>
>>>>>> Model-based learning would be awesome.
>>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>>
>>>>>> I am aware of a few experiments carried out in this area by
>>>>>> my friends in this domain. One lesson we learned was that it is hard to
>>>>>> have a generic algorithm that works for all cases.
>>>>>>
>>>>>> Regards
>>>>>> kalyan.
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>
>>>>>>> Thanks for pointing out this feature to me. I will have a look when
>>>>>>> I get there.
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>> London
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>>>
>>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in
>>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>>>>> service.
>>>>>>>>
>>>>>>>> Uniffle is a general-purpose remote shuffle service (
>>>>>>>> https://github.com/apache/incubator-uniffle). It can enhance the
>>>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>>> support the `ShuffleDriverComponents`; you can see [1].
>>>>>>>>
>>>>>>>> If you are interested in more details of Uniffle, see [2].
>>>>>>>>
>>>>>>>>
>>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>>
>>>>>>>> [2]
>>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *发件人**: *Mich Talebzadeh <mi...@gmail.com>
>>>>>>>> *日期**: *2023年8月8日 星期二 06:53
>>>>>>>> *抄送**: *dev <de...@spark.apache.org>
>>>>>>>> *主题**: *[Internet]Re: Improving Dynamic Allocation Logic for Spark
>>>>>>>> 4+
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>>> without a shuffle service.
>>>>>>>>
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>>
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>
>>>>>>>> London
>>>>>>>>
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>>>> getting the driver going in a timely manner.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>>
>>>>>>>>
>>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>>
>>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> This is on Spark 3.4.1 with Java 11, on both the host running
>>>>>>>> spark-submit and in the Docker image itself.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>>>> can be done?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Mich Talebzadeh,
>>>>>>>>
>>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>>
>>>>>>>> London
>>>>>>>>
>>>>>>>> United Kingdom
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>    view my Linkedin profile
>>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>>> arising from such loss, damage or destruction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Oh great point
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> Thanks Holden for bringing this up!
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Maybe another thing to think about is how to make dynamic
>>>>>>>> allocation more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> So I am wondering if there is interest in revisiting some of how
>>>>>>>> Spark does its dynamic allocation for Spark 4+?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Some things that I've been thinking about:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>>
>>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>>> no-op)
>>>>>>>>
>>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>>
>>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>>> here but, one area for example is we setup and tear down an RPC connection
>>>>>>>> to the driver with a blocking call which does seem to have some locking
>>>>>>>> inside of the driver at first glance)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>>
>>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>>
>>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>>
>>>>>>>> --
>>>> Twitter: https://twitter.com/holdenkarau
>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>
>>>
>
> --
> Regards,
> Qian Sun
>
-- 
Mich Talebzadeh,
Distinguished Technologist, Solutions Architect & Engineer
London
United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Qian Sun <qi...@gmail.com>.
Hi Mich

I agree with your opinion that the startup time of the Spark on Kubernetes
cluster needs to be improved.

Regarding fetching the image directly, I have utilized ImageCache to store
the images on the node, eliminating the time required to pull images from a
remote repository, which does indeed lead to a reduction in overall time,
and the effect becomes more pronounced as the size of the image increases.

Additionally, I have observed that the driver pod takes a significant
amount of time from entering the Running phase to attempting to create
executor pods, with an
estimated time expenditure of around 75%. We can also explore optimization
options in this area.

On Thu, Aug 24, 2023 at 12:58 AM Mich Talebzadeh <mi...@gmail.com>
wrote:

> Hi all,
>
> On this conversation, one of the issues I brought up was the driver start-up
> time. This is especially true in k8s. As Spark on k8s is modeled on the Spark
> standalone scheduler, Spark on k8s consists of a single-driver pod (as
> “master” on standalone) and a number of executors (“workers”). When executed
> on k8s, the driver and executors are executed on separate pods
> <https://spark.apache.org/docs/latest/running-on-kubernetes.html>. First
> the driver pod is launched, then the driver pod itself launches the
> executor pods. From my observation, in an auto scaling cluster, the driver
> pod may take up to 40 seconds, followed by the executor pods. This is a
> considerable time for customers and it is painfully slow. Can we actually
> move away from the dependency on standalone mode and try to speed up k8s
> cluster formation?
>
> Another naive question: when the docker image is pulled from the container
> registry to the driver itself, this takes a finite amount of time. The
> docker image for the executors could be different from that of the driver
> docker image. Since spark-submit presents this at the time of submission,
> can we save time by fetching the docker images straight away?
>
> Thanks
>
> Mich
>
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> Splendid idea. 👍
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>>
>>> The driver itself is probably another topic; perhaps I’ll make a
>>> “faster Spark start time” JIRA and a DA JIRA and we can explore both.
>>>
>>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com> wrote:
>>>
>>>> From my own perspective, faster execution time, especially with Spark on
>>>> tin boxes (Dataproc & EC2) and Spark on k8s, is something that customers
>>>> often bring up.
>>>>
>>>> Poor onboarding time with autoscaling seems to be particularly singled
>>>> out for heavy ETL jobs that use Spark. I am disappointed to see the poor
>>>> performance of Spark on k8s autopilot in the time it takes the driver
>>>> itself to move from the Pending to the Running phase (Spark 3.4.1 with
>>>> Java 11).
>>>>
>>>> HTH
>>>>
>>>> Mich Talebzadeh,
>>>> Solutions Architect/Engineering Lead
>>>> London
>>>> United Kingdom
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>>
>>>>> +1 to enhancements in DEA. Long overdue!
>>>>>
>>>>> There were a few things that I was thinking along the same lines for
>>>>> some time now (a few overlap with @holden's points):
>>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks for
>>>>> some units of resources. But when RM provisions them, the driver cancels
>>>>> it.
>>>>> 2. How to make the resource available when it is needed.
>>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>>> higher costs for faster execution.
>>>>> 4. Stitch resource profile choices into query execution.
>>>>> 5. Allow different DEA algo to be chosen for different queries within
>>>>> the same spark application.
>>>>> 6. Fall back to default algo, when things go haywire!
>>>>>
>>>>> Model-based learning would be awesome.
>>>>> These can be fine-tuned with some tools like sparklens.
>>>>>
>>>>> I am aware of a few experiments carried out in this area by my friends
>>>>> in this domain. One lesson we learned was that it is hard to have a
>>>>> generic algorithm that works for all cases.
>>>>>
>>>>> Regards
>>>>> kalyan.
>>>>>
>>>>>
>>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>> Thanks for pointing out this feature to me. I will have a look when I
>>>>>> get there.
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>> Solutions Architect/Engineering Lead
>>>>>> London
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>>
>>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>>>> service.
>>>>>>>
>>>>>>> Uniffle is a general purpose remote shuffle service (
>>>>>>> https://github.com/apache/incubator-uniffle). It can enhance the
>>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>>> support the `ShuffleDriverComponents`; see [1].
>>>>>>>
>>>>>>> If you are interested in more details of Uniffle, see [2].
>>>>>>>
>>>>>>>
>>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>>
>>>>>>> [2]
>>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *From:* Mich Talebzadeh <mi...@gmail.com>
>>>>>>> *Date:* Tuesday, 8 August 2023 06:53
>>>>>>> *Cc:* dev <de...@spark.apache.org>
>>>>>>> *Subject:* [Internet]Re: Improving Dynamic Allocation Logic for Spark
>>>>>>> 4+
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>>> cause for concern when running Spark on k8s?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled
>>>>>>> without a shuffle service.
>>>>>>>
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>>
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>
>>>>>>> London
>>>>>>>
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>>> getting the driver going in a timely manner.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>>
>>>>>>>
>>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>>
>>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>>> spark-submit and the docker itself
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>>> can be done?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Mich Talebzadeh,
>>>>>>>
>>>>>>> Solutions Architect/Engineering Lead
>>>>>>>
>>>>>>> London
>>>>>>>
>>>>>>> United Kingdom
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>    view my Linkedin profile
>>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>>> arise from relying on this email's technical content is explicitly
>>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>>> arising from such loss, damage or destruction.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Oh great point
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> Thanks Holden for bringing this up!
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Maybe another thing to think about is how to make dynamic allocation
>>>>>>> more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>>> wrote:
>>>>>>>
>>>>>>> So I was wondering if there is interest in revisiting some of how
>>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Some things that I've been thinking about:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>>
>>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>>> no-op)
>>>>>>>
>>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>>
>>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do
>>>>>>> here, but one area for example is that we set up and tear down an RPC
>>>>>>> connection to the driver with a blocking call, which does seem to have
>>>>>>> some locking inside of the driver at first glance)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Is this an area other folks are thinking about? Should I make an
>>>>>>> epic we can track ideas in? Or are folks generally happy with today's
>>>>>>> dynamic allocation (or just busy with other things)?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>
>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>
>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>>
>>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>>
>>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>>
>>>>>>> --
>>> Twitter: https://twitter.com/holdenkarau
>>> Books (Learning Spark, High Performance Spark, etc.):
>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>
>>

-- 
Regards,
Qian Sun

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi all,

On this conversation, one of the issues I brought up was the driver start-up
time. This is especially true in k8s. As Spark on k8s is modeled on the
Spark standalone scheduler, Spark on k8s consists of a single driver pod
(like the “master” in standalone mode) and a number of executors
(“workers”). When executed on k8s, the driver and executors run on separate
pods <https://spark.apache.org/docs/latest/running-on-kubernetes.html>.
First the driver pod is launched, then the driver pod itself launches the
executor pods. From my observation, in an auto scaling cluster, the driver
pod may take up to 40 seconds, followed by the executor pods. This is a
considerable time for customers and it is painfully slow. Can we actually
move away from the dependency on standalone mode and try to speed up k8s
cluster formation?

Another naive question: when the docker image is pulled from the container
registry to the driver itself, this takes a finite amount of time. The
docker image for the executors could be different from that of the driver
docker image. Since spark-submit presents this at the time of submission,
can we save time by fetching the docker images straight away?
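For reference, Spark on k8s already exposes separate driver and executor
image settings plus a pull policy, which is where any such optimization
would plug in. A sketch of the relevant spark-submit configuration,
expressed as a Python conf map (the image names are placeholders):

```python
# Sketch: the image-related Spark-on-Kubernetes settings, expressed as the
# --conf map passed to spark-submit. Image names are placeholders.
image_conf = {
    # Default image used by both driver and executors.
    "spark.kubernetes.container.image": "example.registry/spark:3.4.1",
    # Optional per-role overrides, so driver and executor images can differ.
    "spark.kubernetes.driver.container.image": "example.registry/spark-driver:3.4.1",
    "spark.kubernetes.executor.container.image": "example.registry/spark-exec:3.4.1",
    # IfNotPresent skips the pull when the image is already cached on the node.
    "spark.kubernetes.container.image.pullPolicy": "IfNotPresent",
}

submit_args = [f"--conf {key}={value}" for key, value in image_conf.items()]
```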

Thanks

Mich


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 8 Aug 2023 at 18:25, Mich Talebzadeh <mi...@gmail.com>
wrote:

> Splendid idea. 👍
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:
>
>> The driver itself is probably another topic; perhaps I’ll make a
>> “faster Spark start time” JIRA and a DA JIRA and we can explore both.
>>
>> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> From my own perspective, faster execution time, especially with Spark on
>>> tin boxes (Dataproc & EC2) and Spark on k8s, is something that customers
>>> often bring up.
>>>
>>> Poor onboarding time with autoscaling seems to be particularly singled
>>> out for heavy ETL jobs that use Spark. I am disappointed to see the poor
>>> performance of Spark on k8s autopilot in the time it takes the driver
>>> itself to move from the Pending to the Running phase (Spark 3.4.1 with
>>> Java 11).
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>>
>>>> +1 to enhancements in DEA. Long overdue!
>>>>
>>>> There were a few things that I was thinking along the same lines for
>>>> some time now (a few overlap with @holden's points):
>>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks for
>>>> some units of resources. But when RM provisions them, the driver cancels
>>>> it.
>>>> 2. How to make the resource available when it is needed.
>>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>>> higher costs for faster execution.
>>>> 4. Stitch resource profile choices into query execution.
>>>> 5. Allow different DEA algo to be chosen for different queries within
>>>> the same spark application.
>>>> 6. Fall back to default algo, when things go haywire!
>>>>
>>>> Model-based learning would be awesome.
>>>> These can be fine-tuned with some tools like sparklens.
>>>>
>>>> I am aware of a few experiments carried out in this area by my friends
>>>> in this domain. One lesson we learned was that it is hard to have a
>>>> generic algorithm that works for all cases.
>>>>
>>>> Regards
>>>> kalyan.
>>>>
>>>>
>>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com> wrote:
>>>>
>>>>> Thanks for pointing out this feature to me. I will have a look when I
>>>>> get there.
>>>>>
>>>>> Mich Talebzadeh,
>>>>> Solutions Architect/Engineering Lead
>>>>> London
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>>
>>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>>> service.
>>>>>>
>>>>>> Uniffle is a general purpose remote shuffle service (
>>>>>> https://github.com/apache/incubator-uniffle). It can enhance the
>>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>>> support the `ShuffleDriverComponents`; see [1].
>>>>>>
>>>>>> If you are interested in more details of Uniffle, see [2].
>>>>>>
>>>>>>
>>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>>
>>>>>> [2]
>>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Mich Talebzadeh <mi...@gmail.com>
>>>>>> *Date:* Tuesday, 8 August 2023 06:53
>>>>>> *Cc:* dev <de...@spark.apache.org>
>>>>>> *Subject:* [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+
>>>>>>
>>>>>>
>>>>>>
>>>>>> On the subject of dynamic allocation, is the following message a
>>>>>> cause for concern when running Spark on k8s?
>>>>>>
>>>>>>
>>>>>>
>>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled without
>>>>>> a shuffle service.
>>>>>>
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>>
>>>>>> Solutions Architect/Engineering Lead
>>>>>>
>>>>>> London
>>>>>>
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>>
>>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>>> getting the driver going in a timely manner.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>>
>>>>>>
>>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>>
>>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>>
>>>>>>
>>>>>>
>>>>>> This is on spark 3.4.1 with Java 11 both the host running
>>>>>> spark-submit and the docker itself
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>>> like a kind of blocker for now. What config params can help here and what
>>>>>> can be done?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>> Mich Talebzadeh,
>>>>>>
>>>>>> Solutions Architect/Engineering Lead
>>>>>>
>>>>>> London
>>>>>>
>>>>>> United Kingdom
>>>>>>
>>>>>>
>>>>>>
>>>>>>    view my Linkedin profile
>>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>>
>>>>>>
>>>>>>
>>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>>> for any loss, damage or destruction of data or any other property which may
>>>>>> arise from relying on this email's technical content is explicitly
>>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>>> arising from such loss, damage or destruction.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>>> wrote:
>>>>>>
>>>>>> Oh great point
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>>>>
>>>>>> Thanks Holden for bringing this up!
>>>>>>
>>>>>>
>>>>>>
>>>>>> Maybe another thing to think about is how to make dynamic allocation
>>>>>> more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>>> wrote:
>>>>>>
>>>>>> So I was wondering if there is interest in revisiting some of how
>>>>>> Spark is doing its dynamic allocation for Spark 4+?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Some things that I've been thinking about:
>>>>>>
>>>>>>
>>>>>>
>>>>>> - Advisory user input (e.g. a way to say after X is done I know I
>>>>>> need Y where Y might be a bunch of GPU machines)
>>>>>>
>>>>>> - Configurable tolerance (e.g. if we have at most Z% over target
>>>>>> no-op)
>>>>>>
>>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>>
>>>>>> - Faster executor launches (I'm a little fuzzy on what we can do here,
>>>>>> but one area for example is that we set up and tear down an RPC
>>>>>> connection to the driver with a blocking call, which does seem to have
>>>>>> some locking inside of the driver at first glance)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Is this an area other folks are thinking about? Should I make an epic
>>>>>> we can track ideas in? Or are folks generally happy with today's dynamic
>>>>>> allocation (or just busy with other things)?
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>
>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>
>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>
>>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>>
>>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>>
>>>>>> --
>> Twitter: https://twitter.com/holdenkarau
>> Books (Learning Spark, High Performance Spark, etc.):
>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>
>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Splendid idea. 👍

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 8 Aug 2023 at 18:10, Holden Karau <ho...@pigscanfly.ca> wrote:

> The driver itself is probably another topic; perhaps I’ll make a
> “faster Spark start time” JIRA and a DA JIRA and we can explore both.
>
> On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> From my own perspective, faster execution time, especially with Spark on
>> tin boxes (Dataproc & EC2) and Spark on k8s, is something that customers
>> often bring up.
>>
>> Poor onboarding time with autoscaling seems to be particularly singled
>> out for heavy ETL jobs that use Spark. I am disappointed to see the poor
>> performance of Spark on k8s autopilot in the time it takes the driver
>> itself to move from the Pending to the Running phase (Spark 3.4.1 with
>> Java 11).
>>
>> HTH
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>>
>>> +1 to enhancements in DEA. Long overdue!
>>>
>>> There were a few things that I was thinking along the same lines for
>>> some time now (a few overlap with @holden's points):
>>> 1. How to reduce wastage on the RM side? Sometimes the driver asks for
>>> some units of resources. But when RM provisions them, the driver cancels
>>> it.
>>> 2. How to make the resource available when it is needed.
>>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>>> choose between cost and runtime. Sometimes developers might be ok to pay
>>> higher costs for faster execution.
>>> 4. Stitch resource profile choices into query execution.
>>> 5. Allow different DEA algo to be chosen for different queries within
>>> the same spark application.
>>> 6. Fall back to default algo, when things go haywire!
>>>
>>> Model-based learning would be awesome.
>>> These can be fine-tuned with some tools like sparklens.
>>>
>>> I am aware of a few experiments carried out in this area by my friends
>>> in this domain. One lesson we learned was that it is hard to have a
>>> generic algorithm that works for all cases.
>>>
>>> Regards
>>> kalyan.
>>>
>>>
>>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com> wrote:
>>>
>>>> Thanks for pointing out this feature to me. I will have a look when I
>>>> get there.
>>>>
>>>> Mich Talebzadeh,
>>>> Solutions Architect/Engineering Lead
>>>> London
>>>> United Kingdom
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>>
>>>>> Spark 3.5 has added a method `supportsReliableStorage` in the
>>>>> `ShuffleDriverComponents` which indicates whether shuffle data is
>>>>> written to a distributed filesystem or persisted in a remote shuffle
>>>>> service.
>>>>>
>>>>> Uniffle is a general purpose remote shuffle service (
>>>>> https://github.com/apache/incubator-uniffle). It can enhance the
>>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>>> support the `ShuffleDriverComponents`; see [1].
>>>>>
>>>>> If you are interested in more details of Uniffle, see [2].
>>>>>
>>>>>
>>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>>
>>>>> [2]
>>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>>
>>>>>
>>>>>
>>>>> *From:* Mich Talebzadeh <mi...@gmail.com>
>>>>> *Date:* Tuesday, 8 August 2023 06:53
>>>>> *Cc:* dev <de...@spark.apache.org>
>>>>> *Subject:* [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+
>>>>>
>>>>>
>>>>>
>>>>> On the subject of dynamic allocation, is the following message a cause
>>>>> for concern when running Spark on k8s?
>>>>>
>>>>>
>>>>>
>>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled without
>>>>> a shuffle service.
>>>>>
>>>>>
>>>>> Mich Talebzadeh,
>>>>>
>>>>> Solutions Architect/Engineering Lead
>>>>>
>>>>> London
>>>>>
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>>
>>>>> From what I have seen, Spark on a serverless cluster has a hard time
>>>>> getting the driver going in a timely manner.
>>>>>
>>>>>
>>>>>
>>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>>
>>>>>
>>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>>
>>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>>
>>>>>
>>>>>
>>>>> This is on spark 3.4.1 with Java 11 both the host running spark-submit
>>>>> and the docker itself
>>>>>
>>>>>
>>>>>
>>>>> I am not sure how relevant this is to this discussion but it looks
>>>>> like a kind of blocker for now. What config params can help here and what
>>>>> can be done?
>>>>>
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>> Mich Talebzadeh,
>>>>>
>>>>> Solutions Architect/Engineering Lead
>>>>>
>>>>> London
>>>>>
>>>>> United Kingdom
>>>>>
>>>>>
>>>>>
>>>>>    view my Linkedin profile
>>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>>
>>>>>
>>>>>
>>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca>
>>>>> wrote:
>>>>>
>>>>> Oh great point
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>>>
>>>>> Thanks Holden for bringing this up!
>>>>>
>>>>>
>>>>>
>>>>> Maybe another thing to think about is how to make dynamic allocation
>>>>> more friendly with Kubernetes and disaggregated shuffle storage?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>>> wrote:
>>>>>
>>>>> So I wondering if there is interesting in revisiting some of how Spark
>>>>> is doing it's dynamica allocation for Spark 4+?
>>>>>
>>>>>
>>>>>
>>>>> Some things that I've been thinking about:
>>>>>
>>>>>
>>>>>
>>>>> - Advisory user input (e.g. a way to say after X is done I know I need
>>>>> Y where Y might be a bunch of GPU machines)
>>>>>
>>>>> - Configurable tolerance (e.g. if we have at most Z% over target no-op)
>>>>>
>>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>>
>>>>> - Faster executor launches (I'm a little fuzzy on what we can do here
>>>>> but, one area for example is we setup and tear down an RPC connection to
>>>>> the driver with a blocking call which does seem to have some locking inside
>>>>> of the driver at first glance)
>>>>>
>>>>>
>>>>>
>>>>> Is this an area other folks are thinking about? Should I make an epic
>>>>> we can track ideas in? Or are folks generally happy with today's dynamic
>>>>> allocation (or just busy with other things)?
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>
>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>
>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>
>>>>> --
>>>>>
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>
>>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>>
>>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>>
>>>>> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.):
> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Holden Karau <ho...@pigscanfly.ca>.
The driver itself is probably another topic; perhaps I’ll make a “faster
Spark start time” JIRA and a DA JIRA, and we can explore both.

On Tue, Aug 8, 2023 at 10:07 AM Mich Talebzadeh <mi...@gmail.com>
wrote:

> From my own perspective faster execution time especially with Spark on tin
> boxes (Dataproc & EC2) and Spark on k8s is something that customers often
> bring up.
>
> Poor time to onboard with autoscaling seems to be particularly singled out
> for heavy ETL jobs that use Spark. I am disappointed to see the poor
> performance of Spark on k8s autopilot with timelines starting the driver
> itself and moving from Pending to Running phase (Spark 4.3.1 with Java 11)
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:
>
>> +1 to enhancements in DEA. Long time due!
>>
>> There were a few things that I was thinking along the same lines for some
>> time now(few overlap with @holden 's points)
>> 1. How to reduce wastage on the RM side? Sometimes the driver asks for
>> some units of resources. But when RM provisions them, the driver cancels
>> it.
>> 2. How to make the resource available when it is needed.
>> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
>> choose between cost and runtime. Sometimes developers might be ok to pay
>> higher costs for faster execution.
>> 4. Stitch resource profile choices into query execution.
>> 5. Allow different DEA algo to be chosen for different queries within the
>> same spark application.
>> 6. Fall back to default algo, when things go haywire!
>>
>> Model-based learning would be awesome.
>> These can be fine-tuned with some tools like sparklens.
>>
>> I am aware of a few experiments carried out in this area by my friends in
>> this domain. One lesson we had was, it is hard to have a generic algorithm
>> that worked for all cases.
>>
>> Regards
>> kalyan.
>>
>>
>> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>> Thanks for pointing out this feature to me. I will have a look when I
>>> get there.
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>>
>>>> Spark 3.5 have added an method `supportsReliableStorage`  in the `
>>>> ShuffleDriverComponents` which indicate whether writing  shuffle data
>>>> to a distributed filesystem or persisting it in a remote shuffle service.
>>>>
>>>> Uniffle is a general purpose remote shuffle service (
>>>> https://github.com/apache/incubator-uniffle).  It can enhance the
>>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>>> support the `ShuffleDriverComponents`.  you can see [1].
>>>>
>>>> If you have interest about more details of Uniffle, you can  see [2]
>>>>
>>>>
>>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>>
>>>> [2]
>>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>>
>>>>
>>>>
>>>> *From**: *Mich Talebzadeh <mi...@gmail.com>
>>>> *Date**: *Tuesday, August 8, 2023, 06:53
>>>> *Cc**: *dev <de...@spark.apache.org>
>>>> *Subject**: *[Internet]Re: Improving Dynamic Allocation Logic for Spark 4+
>>>>
>>>>
>>>>
>>>> On the subject of dynamic allocation, is the following message a cause
>>>> for concern when running Spark on k8s?
>>>>
>>>>
>>>>
>>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled without a
>>>> shuffle service.
>>>>
>>>>
>>>> Mich Talebzadeh,
>>>>
>>>> Solutions Architect/Engineering Lead
>>>>
>>>> London
>>>>
>>>> United Kingdom
>>>>
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <mi...@gmail.com>
>>>> wrote:
>>>>
>>>>
>>>>
>>>> Hi,
>>>>
>>>>
>>>>
>>>> From what I have seen spark on a serverless cluster has hard up getting
>>>> the driver going in a timely manner
>>>>
>>>>
>>>>
>>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>>
>>>>
>>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>>
>>>>               autopilot.gke.io/warden-version: 2.7.41
>>>>
>>>>
>>>>
>>>> This is on spark 3.4.1 with Java 11 both the host running spark-submit
>>>> and the docker itself
>>>>
>>>>
>>>>
>>>> I am not sure how relevant this is to this discussion but it looks like
>>>> a kind of blocker for now. What config params can help here and what can be
>>>> done?
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> Mich Talebzadeh,
>>>>
>>>> Solutions Architect/Engineering Lead
>>>>
>>>> London
>>>>
>>>> United Kingdom
>>>>
>>>>
>>>>
>>>>    view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>>
>>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>>
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>>
>>>> Oh great point
>>>>
>>>>
>>>>
>>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>>
>>>> Thanks Holden for bringing this up!
>>>>
>>>>
>>>>
>>>> Maybe another thing to think about is how to make dynamic allocation
>>>> more friendly with Kubernetes and disaggregated shuffle storage?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>>> wrote:
>>>>
>>>> So I wondering if there is interesting in revisiting some of how Spark
>>>> is doing it's dynamica allocation for Spark 4+?
>>>>
>>>>
>>>>
>>>> Some things that I've been thinking about:
>>>>
>>>>
>>>>
>>>> - Advisory user input (e.g. a way to say after X is done I know I need
>>>> Y where Y might be a bunch of GPU machines)
>>>>
>>>> - Configurable tolerance (e.g. if we have at most Z% over target no-op)
>>>>
>>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>>
>>>> - Faster executor launches (I'm a little fuzzy on what we can do here
>>>> but, one area for example is we setup and tear down an RPC connection to
>>>> the driver with a blocking call which does seem to have some locking inside
>>>> of the driver at first glance)
>>>>
>>>>
>>>>
>>>> Is this an area other folks are thinking about? Should I make an epic
>>>> we can track ideas in? Or are folks generally happy with today's dynamic
>>>> allocation (or just busy with other things)?
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>
>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>
>>>> --
>>>>
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>> Books (Learning Spark, High Performance Spark, etc.):
>>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>>
>>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>>
>>>> --
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
YouTube Live Streams: https://www.youtube.com/user/holdenkarau

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
From my own perspective, faster execution time, especially with Spark on
tin boxes (Dataproc & EC2) and Spark on k8s, is something that customers
often bring up.

Poor onboarding time with autoscaling seems to be singled out particularly
for heavy ETL jobs that use Spark. I am disappointed by the poor
performance of Spark on k8s autopilot, with long timelines for starting the
driver itself and moving it from the Pending to the Running phase (Spark
3.4.1 with Java 11).
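For what it's worth, one way to quantify that Pending-to-Running lag is to pull the timestamps out of the driver pod's status conditions. A minimal sketch (the condition type names follow the Kubernetes Pod API; the sample data below is illustrative, not from a real cluster):

```python
from datetime import datetime

def pending_to_running_seconds(conditions):
    """Given a pod's status.conditions list, return the seconds between
    the pod being scheduled and its containers becoming ready."""
    times = {c["type"]: datetime.fromisoformat(c["lastTransitionTime"])
             for c in conditions}
    return (times["ContainersReady"] - times["PodScheduled"]).total_seconds()

# Illustrative sample, shaped like `kubectl get pod <driver> -o json`
# under .status.conditions
sample = [
    {"type": "PodScheduled",    "lastTransitionTime": "2023-08-08T10:00:05+00:00"},
    {"type": "Initialized",     "lastTransitionTime": "2023-08-08T10:01:10+00:00"},
    {"type": "ContainersReady", "lastTransitionTime": "2023-08-08T10:02:35+00:00"},
]
print(pending_to_running_seconds(sample))  # 150.0
```

Tracking this number per run makes it easy to see whether config changes actually move the needle.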

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 8 Aug 2023 at 15:49, kalyan <ju...@gmail.com> wrote:

> +1 to enhancements in DEA. Long time due!
>
> There were a few things that I was thinking along the same lines for some
> time now(few overlap with @holden 's points)
> 1. How to reduce wastage on the RM side? Sometimes the driver asks for
> some units of resources. But when RM provisions them, the driver cancels
> it.
> 2. How to make the resource available when it is needed.
> 3. Cost Vs AppRunTime: A good DEA algo should allow the developer to
> choose between cost and runtime. Sometimes developers might be ok to pay
> higher costs for faster execution.
> 4. Stitch resource profile choices into query execution.
> 5. Allow different DEA algo to be chosen for different queries within the
> same spark application.
> 6. Fall back to default algo, when things go haywire!
>
> Model-based learning would be awesome.
> These can be fine-tuned with some tools like sparklens.
>
> I am aware of a few experiments carried out in this area by my friends in
> this domain. One lesson we had was, it is hard to have a generic algorithm
> that worked for all cases.
>
> Regards
> kalyan.
>
>
> On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> Thanks for pointing out this feature to me. I will have a look when I get
>> there.
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>>
>>> Spark 3.5 have added an method `supportsReliableStorage`  in the `
>>> ShuffleDriverComponents` which indicate whether writing  shuffle data
>>> to a distributed filesystem or persisting it in a remote shuffle service.
>>>
>>> Uniffle is a general purpose remote shuffle service (
>>> https://github.com/apache/incubator-uniffle).  It can enhance the
>>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>>> support the `ShuffleDriverComponents`.  you can see [1].
>>>
>>> If you have interest about more details of Uniffle, you can  see [2]
>>>
>>>
>>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>>
>>> [2]
>>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>>
>>>
>>>
>>> *From**: *Mich Talebzadeh <mi...@gmail.com>
>>> *Date**: *Tuesday, August 8, 2023, 06:53
>>> *Cc**: *dev <de...@spark.apache.org>
>>> *Subject**: *[Internet]Re: Improving Dynamic Allocation Logic for Spark 4+
>>>
>>>
>>>
>>> On the subject of dynamic allocation, is the following message a cause
>>> for concern when running Spark on k8s?
>>>
>>>
>>>
>>> INFO ExecutorAllocationManager: Dynamic allocation is enabled without a
>>> shuffle service.
>>>
>>>
>>> Mich Talebzadeh,
>>>
>>> Solutions Architect/Engineering Lead
>>>
>>> London
>>>
>>> United Kingdom
>>>
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <mi...@gmail.com>
>>> wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> From what I have seen spark on a serverless cluster has hard up getting
>>> the driver going in a timely manner
>>>
>>>
>>>
>>> Annotations:  autopilot.gke.io/resource-adjustment:
>>>
>>>
>>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>>
>>>               autopilot.gke.io/warden-version: 2.7.41
>>>
>>>
>>>
>>> This is on spark 3.4.1 with Java 11 both the host running spark-submit
>>> and the docker itself
>>>
>>>
>>>
>>> I am not sure how relevant this is to this discussion but it looks like
>>> a kind of blocker for now. What config params can help here and what can be
>>> done?
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>> Mich Talebzadeh,
>>>
>>> Solutions Architect/Engineering Lead
>>>
>>> London
>>>
>>> United Kingdom
>>>
>>>
>>>
>>>    view my Linkedin profile
>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca> wrote:
>>>
>>> Oh great point
>>>
>>>
>>>
>>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>>
>>> Thanks Holden for bringing this up!
>>>
>>>
>>>
>>> Maybe another thing to think about is how to make dynamic allocation
>>> more friendly with Kubernetes and disaggregated shuffle storage?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca>
>>> wrote:
>>>
>>> So I wondering if there is interesting in revisiting some of how Spark
>>> is doing it's dynamica allocation for Spark 4+?
>>>
>>>
>>>
>>> Some things that I've been thinking about:
>>>
>>>
>>>
>>> - Advisory user input (e.g. a way to say after X is done I know I need Y
>>> where Y might be a bunch of GPU machines)
>>>
>>> - Configurable tolerance (e.g. if we have at most Z% over target no-op)
>>>
>>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>>
>>> - Faster executor launches (I'm a little fuzzy on what we can do here
>>> but, one area for example is we setup and tear down an RPC connection to
>>> the driver with a blocking call which does seem to have some locking inside
>>> of the driver at first glance)
>>>
>>>
>>>
>>> Is this an area other folks are thinking about? Should I make an epic we
>>> can track ideas in? Or are folks generally happy with today's dynamic
>>> allocation (or just busy with other things)?
>>>
>>>
>>>
>>> --
>>>
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>> Books (Learning Spark, High Performance Spark, etc.):
>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>
>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>
>>> --
>>>
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>> Books (Learning Spark, High Performance Spark, etc.):
>>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>>
>>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>>
>>>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by kalyan <ju...@gmail.com>.
+1 to enhancements in DEA. Long overdue!

There are a few things that I have been thinking about along the same
lines for some time now (a few overlap with @holden's points):
1. How do we reduce wastage on the RM side? Sometimes the driver asks for
some units of resources, but by the time the RM provisions them, the
driver has cancelled the request.
2. How do we make the resource available when it is needed?
3. Cost vs app run time: a good DEA algorithm should allow the developer
to choose between cost and runtime. Sometimes developers might be OK with
paying higher costs for faster execution.
4. Stitch resource-profile choices into query execution.
5. Allow a different DEA algorithm to be chosen for different queries
within the same Spark application.
6. Fall back to the default algorithm when things go haywire!

Model-based learning would be awesome. These ideas can be fine-tuned with
tools like Sparklens.

I am aware of a few experiments carried out in this area by friends in
this domain. One lesson we learned was that it is hard to have a generic
algorithm that works for all cases.
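On point 3 and Holden's configurable-tolerance idea, the core decision could be as small as the check below. This is only a sketch; `should_downscale` and `tolerance_pct` are names made up for illustration, not existing Spark config or API:

```python
def should_downscale(current_execs, target_execs, tolerance_pct):
    """Return how many executors to release: zero if we are at or below
    target, or over target by no more than tolerance_pct; otherwise the
    surplus count."""
    if current_execs <= target_execs:
        return 0
    overshoot_pct = 100.0 * (current_execs - target_execs) / target_execs
    if overshoot_pct <= tolerance_pct:
        return 0  # within tolerance: no-op to avoid churn
    return current_execs - target_execs  # release the surplus

print(should_downscale(105, 100, 10))  # 0  (5% over target, within tolerance)
print(should_downscale(130, 100, 10))  # 30 (30% over target, release surplus)
```

The interesting part is choosing the tolerance per workload, which is where the cost-vs-runtime knob comes in.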

Regards
kalyan.


On Tue, Aug 8, 2023 at 6:12 PM Mich Talebzadeh <mi...@gmail.com>
wrote:

> Thanks for pointing out this feature to me. I will have a look when I get
> there.
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>    view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:
>
>> Spark 3.5 have added an method `supportsReliableStorage`  in the `
>> ShuffleDriverComponents` which indicate whether writing  shuffle data to
>> a distributed filesystem or persisting it in a remote shuffle service.
>>
>> Uniffle is a general purpose remote shuffle service (
>> https://github.com/apache/incubator-uniffle).  It can enhance the
>> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
>> support the `ShuffleDriverComponents`.  you can see [1].
>>
>> If you have interest about more details of Uniffle, you can  see [2]
>>
>>
>> [1] https://github.com/apache/incubator-uniffle/issues/802.
>>
>> [2]
>> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>>
>>
>>
>> *From**: *Mich Talebzadeh <mi...@gmail.com>
>> *Date**: *Tuesday, August 8, 2023, 06:53
>> *Cc**: *dev <de...@spark.apache.org>
>> *Subject**: *[Internet]Re: Improving Dynamic Allocation Logic for Spark 4+
>>
>>
>>
>> On the subject of dynamic allocation, is the following message a cause
>> for concern when running Spark on k8s?
>>
>>
>>
>> INFO ExecutorAllocationManager: Dynamic allocation is enabled without a
>> shuffle service.
>>
>>
>> Mich Talebzadeh,
>>
>> Solutions Architect/Engineering Lead
>>
>> London
>>
>> United Kingdom
>>
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>>
>>
>>
>> On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>
>>
>> Hi,
>>
>>
>>
>> From what I have seen spark on a serverless cluster has hard up getting
>> the driver going in a timely manner
>>
>>
>>
>> Annotations:  autopilot.gke.io/resource-adjustment:
>>
>>
>> {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
>>
>>               autopilot.gke.io/warden-version: 2.7.41
>>
>>
>>
>> This is on spark 3.4.1 with Java 11 both the host running spark-submit
>> and the docker itself
>>
>>
>>
>> I am not sure how relevant this is to this discussion but it looks like a
>> kind of blocker for now. What config params can help here and what can be
>> done?
>>
>>
>>
>> Thanks
>>
>>
>>
>> Mich Talebzadeh,
>>
>> Solutions Architect/Engineering Lead
>>
>> London
>>
>> United Kingdom
>>
>>
>>
>>    view my Linkedin profile
>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>>
>>
>>
>> On Mon, 7 Aug 2023 at 22:39, Holden Karau <ho...@pigscanfly.ca> wrote:
>>
>> Oh great point
>>
>>
>>
>> On Mon, Aug 7, 2023 at 2:23 PM bo yang <bo...@gmail.com> wrote:
>>
>> Thanks Holden for bringing this up!
>>
>>
>>
>> Maybe another thing to think about is how to make dynamic allocation more
>> friendly with Kubernetes and disaggregated shuffle storage?
>>
>>
>>
>>
>>
>>
>>
>> On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <ho...@pigscanfly.ca> wrote:
>>
>> So I wondering if there is interesting in revisiting some of how Spark is
>> doing it's dynamica allocation for Spark 4+?
>>
>>
>>
>> Some things that I've been thinking about:
>>
>>
>>
>> - Advisory user input (e.g. a way to say after X is done I know I need Y
>> where Y might be a bunch of GPU machines)
>>
>> - Configurable tolerance (e.g. if we have at most Z% over target no-op)
>>
>> - Past runs of same job (e.g. stage X of job Y had a peak of K)
>>
>> - Faster executor launches (I'm a little fuzzy on what we can do here
>> but, one area for example is we setup and tear down an RPC connection to
>> the driver with a blocking call which does seem to have some locking inside
>> of the driver at first glance)
>>
>>
>>
>> Is this an area other folks are thinking about? Should I make an epic we
>> can track ideas in? Or are folks generally happy with today's dynamic
>> allocation (or just busy with other things)?
>>
>>
>>
>> --
>>
>> Twitter: https://twitter.com/holdenkarau
>>
>> Books (Learning Spark, High Performance Spark, etc.):
>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>
>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>
>> --
>>
>> Twitter: https://twitter.com/holdenkarau
>>
>> Books (Learning Spark, High Performance Spark, etc.):
>> https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
>>
>> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>>
>>

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks for pointing out this feature to me. I will have a look when I get
there.

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 8 Aug 2023 at 11:44, roryqi(齐赫) <ro...@tencent.com> wrote:

> Spark 3.5 have added an method `supportsReliableStorage`  in the `
> ShuffleDriverComponents` which indicate whether writing  shuffle data to
> a distributed filesystem or persisting it in a remote shuffle service.
>
> Uniffle is a general purpose remote shuffle service (
> https://github.com/apache/incubator-uniffle).  It can enhance the
> experience of Spark on K8S. After Spark 3.5 is released, Uniffle will
> support the `ShuffleDriverComponents`.  you can see [1].
>
> If you have interest about more details of Uniffle, you can  see [2]
>
>
> [1] https://github.com/apache/incubator-uniffle/issues/802.
>
> [2]
> https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
>
>
>
> *From**: *Mich Talebzadeh <mi...@gmail.com>
> *Date**: *Tuesday, August 8, 2023, 06:53
> *Cc**: *dev <de...@spark.apache.org>
> *Subject**: *[Internet]Re: Improving Dynamic Allocation Logic for Spark 4+
>
>
>
> On the subject of dynamic allocation, is the following message a cause for
> concern when running Spark on k8s?
>
>
>
> INFO ExecutorAllocationManager: Dynamic allocation is enabled without a
> shuffle service.
>
>
> Mich Talebzadeh,
>
> Solutions Architect/Engineering Lead
>
> London
>
> United Kingdom
>
>
>
>    view my Linkedin profile

Re: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by "roryqi(齐赫)" <ro...@tencent.com.INVALID>.
Spark 3.5 has added a method `supportsReliableStorage` to `ShuffleDriverComponents`, which indicates whether shuffle data is written to a distributed filesystem or persisted in a remote shuffle service.
Uniffle is a general-purpose remote shuffle service (https://github.com/apache/incubator-uniffle). It can improve the experience of Spark on K8S. Once Spark 3.5 is released, Uniffle will support `ShuffleDriverComponents`; see [1].
If you are interested in more details about Uniffle, see [2].

[1] https://github.com/apache/incubator-uniffle/issues/802.
[2] https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era
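For anyone who wants to try this combination, the client side mostly comes down to a handful of Spark confs. A minimal sketch, assuming a Uniffle coordinator is already running — the coordinator addresses are placeholders, and the exact property names should be double-checked against the Uniffle docs in [2]:

```python
# Hedged sketch: Spark confs for routing shuffle through Uniffle's remote
# shuffle service instead of executor-local disk. Coordinator addresses are
# placeholders; property names follow the Uniffle client docs at the time
# of writing and should be verified against [2].
uniffle_confs = {
    # Swap Spark's default sort-based shuffle for Uniffle's client.
    "spark.shuffle.manager": "org.apache.spark.shuffle.RssShuffleManager",
    # Where the Uniffle coordinator(s) live -- placeholder host:port values.
    "spark.rss.coordinator.quorum": "coordinator-1:19999,coordinator-2:19999",
    # With shuffle data off the executors, dynamic allocation can reclaim
    # executors without losing shuffle blocks.
    "spark.dynamicAllocation.enabled": "true",
}

def to_submit_args(confs):
    """Render a conf dict as spark-submit --conf flags."""
    return [f"--conf {k}={v}" for k, v in sorted(confs.items())]
```

Rendered with `to_submit_args`, these drop straight onto a spark-submit command line.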

From: Mich Talebzadeh <mi...@gmail.com>
Date: Tuesday, 8 August 2023, 06:53
Cc: dev <de...@spark.apache.org>
Subject: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+


Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
On the subject of dynamic allocation, is the following message a cause for
concern when running Spark on k8s?

INFO ExecutorAllocationManager: Dynamic allocation is enabled without a
shuffle service.
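That message is usually informational rather than fatal: on K8s there is typically no external shuffle service, so dynamic allocation instead relies on `spark.dynamicAllocation.shuffleTracking.enabled` to let the driver track which executors still hold shuffle data before releasing them. A sketch of the confs commonly paired together (the timeout value is an illustrative choice, not a recommendation):

```python
# Sketch: confs commonly set for dynamic allocation on K8s, where no
# external shuffle service is available. Values are illustrative only.
k8s_dyn_alloc_confs = {
    "spark.dynamicAllocation.enabled": "true",
    # Track executors holding shuffle data so they are not killed too early.
    "spark.dynamicAllocation.shuffleTracking.enabled": "true",
    # How long to keep an idle executor alive just for its shuffle data.
    "spark.dynamicAllocation.shuffleTracking.timeout": "300s",
}
```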

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.





Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi,

From what I have seen, Spark on a serverless cluster has a hard time getting
the driver going in a timely manner

Annotations:  autopilot.gke.io/resource-adjustment:

{"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
              autopilot.gke.io/warden-version: 2.7.41

This is on Spark 3.4.1 with Java 11, on both the host running spark-submit
and in the Docker image itself

I am not sure how relevant this is to this discussion but it looks like a
kind of blocker for now. What config params can help here and what can be
done?

Thanks
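One thing worth checking is whether the Autopilot resource adjustment shown above is simply rewriting an unset driver request. Setting explicit driver requests may avoid that mutation round-trip; a sketch using standard Spark-on-K8s properties, with placeholder sizes to be tuned per workload:

```python
# Sketch: explicit driver sizing for Spark on K8s, so the admission
# controller (e.g. GKE Autopilot) does not have to rewrite the pod spec.
# Sizes below are placeholders -- tune them to the workload.
driver_sizing_confs = {
    "spark.kubernetes.driver.request.cores": "1",
    "spark.kubernetes.driver.limit.cores": "1",
    # The container memory request is derived from this plus memory overhead.
    "spark.driver.memory": "2g",
}
```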


Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Holden Karau <ho...@pigscanfly.ca>.
Oh great point

Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9  <https://amzn.to/2MaRAG9>
YouTube Live Streams: https://www.youtube.com/user/holdenkarau

Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by bo yang <bo...@gmail.com>.
Thanks Holden for bringing this up!

Maybe another thing to think about is how to make dynamic allocation more
friendly with Kubernetes and disaggregated shuffle storage?




Re: Improving Dynamic Allocation Logic for Spark 4+

Posted by Thomas Graves <tg...@gmail.com>.
> > - Advisory user input (e.g. a way to say after X is done I know I need Y where Y might be a bunch of GPU machines)

Are you thinking of something more advanced than Stage Level
Scheduling? Or perhaps configuring it differently, or prestarting things
you know you will need?

Tom
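For concreteness, the "advisory input" and "configurable tolerance" bullets quoted below could compose into something like this toy planner — purely illustrative, none of these names exist in Spark, and a real version would presumably live inside the `ExecutorAllocationManager`:

```python
# Toy sketch of two of the proposed knobs: user advisory hints
# ("after stage X completes I will need Y executors") plus a tolerance
# band so small overshoots do not trigger churn. Purely illustrative.
def plan_target(current, demand, hints, completed_stages, tolerance_pct):
    """Return the new executor target, or `current` if within tolerance."""
    # Advisory hints: take the largest demand promised for any completed stage.
    advised = max(
        (need for stage, need in hints.items() if stage in completed_stages),
        default=0,
    )
    target = max(demand, advised)
    # Tolerance: if we are over target by at most tolerance_pct percent, no-op.
    if current >= target and (current - target) * 100 <= tolerance_pct * target:
        return current
    return target
```

Here a hint like `{"stage_3": 20}` says "once stage_3 completes I'll need 20 executors", and a 10-20% tolerance band would absorb small overshoots instead of tearing executors down immediately.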

On Mon, Aug 7, 2023 at 3:27 PM Holden Karau <ho...@pigscanfly.ca> wrote:
>
> So I am wondering if there is interest in revisiting some of how Spark is doing its dynamic allocation for Spark 4+?
>
> Some things that I've been thinking about:
>
> - Advisory user input (e.g. a way to say after X is done I know I need Y where Y might be a bunch of GPU machines)
> - Configurable tolerance (e.g. if we have at most Z% over target no-op)
> - Past runs of same job (e.g. stage X of job Y had a peak of K)
> - Faster executor launches (I'm a little fuzzy on what we can do here, but one area, for example, is that we set up and tear down an RPC connection to the driver with a blocking call, which does seem to have some locking inside of the driver at first glance)
>
> Is this an area other folks are thinking about? Should I make an epic we can track ideas in? Or are folks generally happy with today's dynamic allocation (or just busy with other things)?
>
> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscribe@spark.apache.org