Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2022/10/13 18:45:54 UTC

[GitHub] [druid] churromorales commented on a diff in pull request #13156: Support for middle manager less druid, tasks launch as k8s jobs

churromorales commented on code in PR #13156:
URL: https://github.com/apache/druid/pull/13156#discussion_r995000617


##########
docs/development/extensions-contrib/k8s-jobs.md:
##########
@@ -0,0 +1,125 @@
+---
+id: k8s-jobs
+title: "MM-less Druid in K8s"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.
+
+Apache Druid extension to enable launching and managing tasks with Kubernetes instead of Middle Managers.  This extension allows you to launch tasks as K8s Jobs, removing the need for your Middle Managers.
+
+## How it works
+
+The extension takes the podSpec of your `Overlord` pod and creates a Kubernetes Job from that podSpec.  Because the Job inherits the Overlord's podSpec, any sidecars you run, such as Splunk, Hubble, or Istio, can optionally be carried over into the task's K8s Job.  All jobs are natively restorable: they are decoupled from the Druid deployment, so restarting pods or doing upgrades has no effect on tasks in flight.  Tasks continue to run, and when the Overlord comes back up it starts tracking them again.

Review Comment:
   So the reason for the podSpec is that I didn't want to take a brand new docker image and create a job spec for it.  I don't know your cluster config.  Suppose you have secrets mounted, volumes set up your way, env variables, certs, annotations for things like istio, etc.  The parent pod spec logic is there to ensure your peon task has those things and you don't have to worry about it.  But I do take that pod spec and massage it.  I do things like change the resources required.  For CPU we always give the task a single core (just like what Druid was always doing).  For memory we take the jvm opts you pass and configure the container resources from that.  I believe it's something like (Xmx + dbb) * 1.2.  So while the CPU resources are fixed at a core, the memory is deduced from your jvm opts.  I want to make sure I answered your question correctly here, please let me know if things don't make sense.
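
To make the sizing rule in the comment concrete, here is a minimal Java sketch of it. The class and method names are hypothetical, not the extension's actual API; only the arithmetic comes from the comment above: CPU is pinned at one core, and the container memory request is derived from the task's JVM options as roughly (Xmx + direct memory) * 1.2.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch only: PeonResourceEstimator is a hypothetical name,
// not a class in the extension. It implements the rule described above.
public class PeonResourceEstimator
{
  private static final Pattern XMX = Pattern.compile("-Xmx(\\d+)([gGmMkK]?)");
  private static final Pattern DIRECT = Pattern.compile("-XX:MaxDirectMemorySize=(\\d+)([gGmMkK]?)");

  /** CPU request for every peon task: a single core, as Druid has always done. */
  public static long cpuMillicores()
  {
    return 1000L;
  }

  /** Memory request in bytes: (Xmx + direct memory) * 1.2, parsed from the task's jvm opts. */
  public static long memoryBytes(List<String> jvmOpts)
  {
    long heap = 0L;
    long direct = 0L;
    for (String opt : jvmOpts) {
      Matcher xmx = XMX.matcher(opt);
      if (xmx.find()) {
        heap = toBytes(xmx.group(1), xmx.group(2));
      }
      Matcher dbb = DIRECT.matcher(opt);
      if (dbb.find()) {
        direct = toBytes(dbb.group(1), dbb.group(2));
      }
    }
    return (long) ((heap + direct) * 1.2);
  }

  private static long toBytes(String value, String unit)
  {
    long v = Long.parseLong(value);
    switch (unit.toLowerCase()) {
      case "g": return v * 1024L * 1024L * 1024L;
      case "m": return v * 1024L * 1024L;
      case "k": return v * 1024L;
      default:  return v;
    }
  }
}
```

For example, a task launched with `-Xmx4g -XX:MaxDirectMemorySize=1g` would get a memory request of roughly (4 GiB + 1 GiB) * 1.2 = 6 GiB, while its CPU request stays at one core.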



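The "How it works" paragraph in the diff above (taking the Overlord's podSpec and wrapping it in a Kubernetes Job so sidecars, mounts, and annotations carry over) could look roughly like the following sketch. It uses the fabric8 Kubernetes client builders purely as an illustration; the class name, parameters, and settings shown are assumptions, not the extension's actual code.

```java
import io.fabric8.kubernetes.api.model.PodSpec;
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;

// Hypothetical helper, not part of the extension: shows how an existing podSpec
// can be reused as the template of a Job so the peon inherits the Overlord's
// sidecars, volumes, secrets, env variables, certs, and annotations.
public class PeonJobFactory
{
  public Job fromOverlordPodSpec(String jobName, PodSpec overlordPodSpec)
  {
    // The Job should not restart its pod on its own; the Overlord re-attaches
    // to still-running jobs after it comes back up.
    overlordPodSpec.setRestartPolicy("Never");

    return new JobBuilder()
        .withNewMetadata()
          .withName(jobName)                 // assumed to be a K8s-safe name derived from the task id
        .endMetadata()
        .withNewSpec()
          .withBackoffLimit(0)               // don't let K8s retry a failed task
          .withNewTemplate()
            .withSpec(overlordPodSpec)       // carries over sidecars, mounts, certs, etc.
          .endTemplate()
        .endSpec()
        .build();
  }
}
```

The key design point from the thread: by reusing the parent pod spec instead of a hand-written job spec, the peon automatically matches whatever cluster-specific setup (secrets, volumes, istio annotations, and so on) the Overlord already has.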
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


